You wake up early to the sound of an alarm that exists only inside your own head, and it is silenced automatically the moment you are awake. You get up to shower and the water is already running, at exactly the temperature you like. While drinking your coffee you receive a message from a prospective client asking whether it would be possible to move the meeting forward. You respond without needing to check your availability or what else you are doing that day – clarity about your diary simply appears, as if you had photographic recall. You confirm your software is updated and begin preparing for the day’s meetings. They are a mix of in-person and virtual, but you can join them in the metaverse, the virtual worlds simply appearing to you without the need for a headset. At no point have you touched a phone or computer. Instead, a small device connected directly to your brain allows you to interact with your other devices, the internet, and virtual worlds with a simple thought.
Until relatively recently, body-augmenting technology was the stuff of far-future science fiction: a protagonist, often involuntarily augmented, able to perform feats of strength, agility and cognition far beyond their peers. But fiction may soon become fact, as several organisations make significant progress in developing devices designed to augment human capabilities, a field often referred to as ‘neurotechnology’.
In August 2022, the Law Society of England and Wales published a report with Dr Allan McCay on how these technologies may affect society and the practice of law. The report considers what exactly neurotechnology is, its potential impact on society, and the likely challenges and opportunities facing the legal profession.
Here we consider several matters raised in the report and reflect on them in the context of the developing regulatory landscape for AI (since many neurotechnologies rely on AI within the brain-machine interface) and for technology across the world. We highlight several potential concerns facing organisations involved in the development, production, distribution and use of neurotechnology.
What is Neurotechnology?
Neurotechnology is the category of devices that interact with, monitor and modulate a person’s brain or nervous system. The canonical sci-fi variant is the neural lace described in Iain M Banks’s ‘Culture’ series (there is more on this below), but similar techno-telepathy systems have featured in many other works. Among the real-life variants being worked on, some would be implanted directly into the brain of the user, as is the case with neurostimulators used in the treatment of Parkinson’s disease. Others are more akin to sophisticated wearables, such as those used to interact with the metaverse and other computer-based software.
In essence, neurotechnology is about sending signals to and receiving signals from the brain. This can be literally at the level of direct electrical connections to neurons within the user’s (or, in some cases, patient’s) brain. The technology can then, depending on its purpose, read and/or write signals from and to the person’s brain and nervous system. One example of ‘read’ capabilities described in the report involves patients suffering from locked-in syndrome. Such a person may have a device implanted into their head (or wear a non-invasive headset) that ‘reads’ and interprets the signals of their brain and translates them into signals that can be interpreted by the device and other connected technology. For example, a computer might translate the electromagnetic patterns created by certain thoughts into movements of a cursor on screen, giving the patient a significant avenue for expression of will and control over their environment. In the case of ‘write’ capabilities, a notable example is the neurotechnology applied in the treatment of Parkinson’s disease, where corrective signals, combined with those coming from the patient’s brain, provide an artificial ‘software patch’ of sorts, allowing the patient to better control their symptoms.
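To make the ‘read’ pathway more concrete, the sketch below shows, in Python, the general shape of a minimal decoding loop: extract a frequency-band feature from a short window of neural signal and map it to a cursor command. Everything here (the sampling rate, the frequency bands, the hand-set threshold) is an illustrative assumption; real brain-computer interfaces rely on classifiers trained on each user’s calibration data rather than fixed rules.

```python
import numpy as np

FS = 250  # assumed sampling rate (Hz) of a hypothetical headset


def band_power(window: np.ndarray, lo: float, hi: float) -> float:
    """Total spectral power of a single-channel window within [lo, hi) Hz."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1 / FS)
    return float(spectrum[(freqs >= lo) & (freqs < hi)].sum())


def decode_cursor_step(window: np.ndarray) -> int:
    """Map a one-second window to a cursor step: +1 (right), -1 (left), 0 (hold).

    Compares power in two illustrative bands; the bands and the 2x threshold
    are placeholders for a properly trained classifier.
    """
    mu = band_power(window, 8, 12)    # mu rhythm, suppressed by imagined movement
    beta = band_power(window, 13, 30)
    if mu > 2 * beta:
        return 0                      # strong mu: treat as 'rest', hold the cursor
    return 1 if beta > mu else -1


# Demo on a simulated one-second window of noise standing in for a recording.
rng = np.random.default_rng(0)
print(decode_cursor_step(rng.normal(size=FS)))
```

Whatever the decoding method, the pipeline shape (window of signal, extracted feature, decision) is the part that carries over to real systems.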
Read/write capabilities, however, are not useful exclusively in the treatment of disease and the restoration of communication. Much as upgrading your computer’s components improves its performance, the augmentation of a person’s brain and nervous system has the potential to offer a great deal more in terms of application.
Applications of Neurotechnology
Medical
The use of neurotechnology in a medical context has long been a focus of research. As noted above, neurotechnology is already allowing us to provide hope to patients with several classes of neurodegenerative disease. As the report notes, it is not possible to consider every application of neurotechnology in medicine as, quite simply, the use cases are already too extensive. In development, at the time of writing, are even more advanced devices in the form of auditory or visual aids and devices targeted at improving memory or reducing the symptoms of neurodegenerative illnesses. In the not-too-distant future, it is easy to imagine neurotechnology providing solutions for complicated mental health issues such as chronic depression. Given how many people are affected by such issues at some point in their life, neurotechnologies used to manage such conditions could well become widespread.
Military
Much discussed, but (at least currently) less applied, is the use of neurotechnology in non-medical applications by the armed forces. The report cites a paper, published by the Ministry of Defence in 2021, that provides a coherent and succinct example of why governments and the defence sector are so interested in neurotechnology.
“[I]n terms of augmentation, brain interfaces could: enhance concentration and memory function; lead to new forms of collaborative intelligence; or even allow new skills or knowledge to simply be ‘downloaded’. Manipulating the physical world by thoughts alone would also be possible; anything from a door handle to an aircraft could, in theory and more recently in practice, be controlled from anywhere in the world.”
The ability to rapidly prepare (or even ‘upgrade’) soldiers for battle may offer any number of advantages over adversaries, so it is of little surprise that such interest has been directed at its further development. Critics point to the serious ethical questions posed if a consequence of neurotechnologically-augmented soldiers is the compromise of a soldier’s free will. “I was only following orders” is already a hollow defence to accusations of war crimes, but a soldier who can no longer choose to disobey under any circumstances presents a nightmarish vision.
Personal and Professional
The ‘upgrade’ of a person as a means of competitive advantage is by no means restricted to military applications. In everyday life, the ability to remember more, perform tasks more rapidly, or learn skills more quickly is of equal interest to society. In a world where employers vie for the best talent, and places at the best universities are oversubscribed, the ability to keep a secret ace up one’s sleeve is of undeniable interest. However, if such technologies are accessible only to a privileged few, those in less affluent socio-economic groups will be at an ever-greater disadvantage: neural enhancements could become a means by which social mobility is stifled further.
Not every use of neurotechnology has to read like the plot of an episode of ‘Black Mirror’, however, and many uses could prove socially beneficial. The report, for example, describes the use of the technology to monitor the cognitive states of those in high-pressure jobs, such as air-traffic controllers, and spot when employees are stressed or fatigued. This would allow employers to justify breaks throughout the day and ensure the wellbeing of their staff is maintained.
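As a purely illustrative sketch of how such monitoring might work, suppose the wearable emits a periodic ‘alertness’ score between 0.0 and 1.0 (the score, the class below and its threshold are hypothetical rather than taken from the report). A rolling average is enough to show the shape of the idea:

```python
from collections import deque


class AlertnessMonitor:
    """Suggests a break when the rolling mean of a hypothetical
    0.0-1.0 alertness score drops below a threshold."""

    def __init__(self, window: int = 10, threshold: float = 0.7):
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def update(self, score: float) -> bool:
        """Record the latest score; return True once the window is full
        and its mean has dropped below the threshold."""
        self.scores.append(score)
        window_full = len(self.scores) == self.scores.maxlen
        return window_full and sum(self.scores) / len(self.scores) < self.threshold


monitor = AlertnessMonitor()
for score in [0.9, 0.85, 0.8, 0.7, 0.65, 0.6, 0.55, 0.5, 0.5, 0.45]:
    if monitor.update(score):
        print("Suggest a break")  # fires on the tenth reading (mean 0.65 < 0.7)
```

Of course, who sets the threshold and what follows from a flag raise exactly the privacy questions discussed in the next section.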
Plunging into the vortex
As with all novel technology, the use of neural enhancement and brain-machine interfaces requires a fine balance to be struck. Failure to properly consider the consequences of use at an early stage may result in misuse or function creep. Should appropriate measures not be put in place at the outset, we may quickly find ourselves in an uncontrollable spiral from which we cannot easily emerge.
Privacy and confidential information
One of the primary concerns is privacy. Even with today’s technology, the ability to tap into, and accumulate, data directly from a person, whether biological markers (such as risk of disease) or signals (such as responses to external stimuli), undermines that person’s ability to keep otherwise internal and private information from being exposed to others.
Insurers, for example, may be able to use this additional information to contribute to their determination of a person’s risk of neurological issues in a more intrusive manner than before. Similarly, it would not be too great a stretch of the imagination to think that data obtained from these devices could be used to predict behavioural outcomes, and even manipulate and engineer an individual’s behaviour in a particular direction.
Potentially more concerning, however, is the creation of additional vectors for surveillance. The augmentation of persons (either by corporations or governments) would make it far easier to gain insight into those individuals’ mental states and confidential information, whether this be over a continuous period or in response to stimuli. While, as noted above, there may be advantages to this in certain high-pressure environments, the ability to monitor a person’s internal responses may result in ‘unintended’ outcomes. This intrusive monitoring of response to stimuli could equally be applied in a marketing or governmental application, where companies and candidates could amend campaigns and policies based on the internal responses of those transmitting data. A significant and related concern of all of this is the potential for espionage and stealing of confidential information and trade secrets.
Even further into the future, it is perfectly conceivable that technologies could give direct access to thoughts or memories. It is uncomfortable enough thinking malware might give hackers control of your laptop; the idea that the interface to your brain might be compromised is utterly terrifying.
We must therefore consider whether the current regulatory frameworks protecting privacy are adequate to address body augmentation technology and the growing potential for otherwise private responses to be used for other ends. Even more fundamentally, we should ask whether this is an acceptable ancillary use of neurotechnology at all, and how the risks may be mitigated.
Safety
It is equally important that the devices with which people are augmenting themselves are safe for the user and those around them. People are unlikely to be willing, for medical purposes or otherwise, to accept unnecessary risks arising from body augmentation, and will want assurances that a reasonable standard of safety is met. After all, if a device is interfering with your brain, you would want to be very confident that any potential for damage to the seat of consciousness is infinitesimal. A base principle of their use, therefore, is that neurotechnology should be held to a high standard throughout its lifecycle: from design and development, through use and maintenance, to the decommissioning and removal of the devices. This may involve technical methods, such as certification of the parts used, as seen in the case of medical devices, or procedural interventions, such as analysis of results, algorithm reviews for bias and the prompt investigation of any faulty results.
The safety of a device should also take account of its capability of being accessed by third parties, particularly those of a more malevolent nature. For a device to be deemed ‘safe’, it should include features preventing unauthorised access to its data and features from both a hardware (such as external access locks) and a software (such as firewalls) perspective. We have touched above on the nightmare scenario of having your thoughts and memories hacked. The specifics of protection from external threats would likely follow a risk-based approach, based on the invasiveness of the device and the information involved, but a minimum standard of protection would be expected, with devices capable of accessing thought or memory requiring proportionally higher standards of security.
Ethical and Fair Use
The ethics of augmentation and neurotechnology have been explored in serious academic papers and speculative fiction alike. One of the more colourful examples is found in Iain M Banks’s novel ‘Excession’, where a powerful AI starship mind called the ‘Grey Area’ is castigated by its peers for its habit of reading the minds of others without their consent. In one memorable passage, the Grey Area chillingly explains to another, rather naïve, character that her ‘neural lace’, the name given to brain-machine interface technology in the novels, could be deployed as the single most effective torture device for organic living beings ever invented.
Whilst a gruesome prospect, this illustrates that the power to directly manipulate the signals within the brain creates quite a different class of ethical challenge. The issues stretch from the social impacts that might arise if only a privileged few have access to the advantages granted by neurotechnologies, through to nightmarish visions of mind control and the removal of privacy of thought that make George Orwell’s ‘1984’ sound like a libertarian utopia.
Having painted a dark picture of the ethical risks, we should remember there are also ethical challenges if the technology is not explored. As we considered in the medical context above, brain-machine interfaces are already showing enormous potential to treat medical conditions that rob people of freedom and dignity. To abandon a technology with such promise would raise ethical questions of its own. Medical use is not without its risks and challenges (even the best-intentioned treatments might change the patient’s personality in ways they might not wish for), but it presents a particularly good example of the benefits we might realise from these technologies. An even more techno-utopian vision posits that brain-machine interfaces could facilitate the voluntary sharing of thoughts without the barrier of language getting in the way, leading to more widespread empathy and understanding. In this narrative, neurotechnology is a vital step toward a more connected and more collaborative version of the human species. While some might read this as a positive development, it might sound nightmarish in its own way to those with a strongly independent sense of themselves.
Given this wide array of ethical concerns, it is clear that regulatory frameworks will need to be developed and enhanced to address the unique challenges posed by these technologies.
Regulation of neurotechnology
Neurotechnology interacts with several areas of law across different sectors: life sciences, technology, medical devices, software, AI, and data, to name just a few. The approach to its regulation, unsurprisingly, is equally varied. Separate aspects, such as software or medical approval, each have their own regulatory regime that must be followed to ensure the technology is compliant.
US Perspective
In the US, the Food and Drug Administration (FDA) is an important regulator of neurotechnology. Many neurotechnologies would be regulated for safety and efficacy under the FDA’s existing medical device regulations[1]. Most existing neurological devices are classified as class II (moderate risk) or class III (high risk) devices and undergo some form of pre-market review by the FDA. Neurotechnologies that do not fall under the FDA’s purview are more likely to raise questions about whether there are sufficient safeguards to ensure user safety, as well as the data security and fairness issues that the FDA is increasingly scrutinising in relation to digital technologies.
Various other US government agencies would likely assert oversight of neurotechnologies, including the Federal Trade Commission (from a consumer protection perspective), the Department of Defense (for military applications) and sector regulators depending on the context of use (for instance, if the technology were used by air traffic controllers as in the example described above, we might expect the Federal Aviation Administration to take an interest). From an information and data protection perspective, a patchwork of privacy laws would presently apply, although moves are afoot for a federal privacy law to be enacted. None is specifically geared at protecting brain data, but data collected by neurotechnologies could trigger various existing federal and state-level privacy laws (including the Health Insurance Portability and Accountability Act (HIPAA), the Health Information Technology for Economic and Clinical Health Act (HITECH), the California Consumer Privacy Act (CCPA), the California Privacy Rights Act (CPRA) and the Illinois Biometric Information Privacy Act (BIPA)), depending on the information being collected and by whom.
Various trade secret laws could also come into play, such as the Defend Trade Secrets Act of 2016, as well as the Economic Espionage Act of 1996, which provides criminal sanctions for the theft or misappropriation of trade secrets.
EU and UK Perspectives
A similar analysis applies in the EU and UK. Neurotechnology would typically be regarded as a medical device for the purposes of the Medical Devices Regulation (EU) 2017/745 in the EU, with a parallel regime applying in the post-Brexit UK. Under these regimes, any neurotechnology would have to be assessed against relevant standards and CE or UKCA marked. Any implanted neurotechnologies would also require patients/users to be given implant cards describing the implanted device.
Similarly, we would expect other regulations to be engaged. Devices are likely to contain machine learning components that would be caught by the EU AI Regulation (more on that below) and could be subject to export controls as controlled technologies. Defence use cases would trigger the application of relevant regimes from that sphere, and sector use cases would be subject to specific sector regulation.
One big area of difference between any EU/UK approach and the US would be in the field of privacy. The EU and UK flavours of data protection regulation (GDPR and the fast-evolving post-Brexit data landscape in the UK) cast a long shadow. These would provide some additional reassurance to EU- and UK-based neurotechnology users that some of the more egregious and privacy-invasive secondary uses of neurotechnologies would not be as prevalent in those territories.
As in the US, European law (including the Trade Secrets Directive (EU) 2016/943) and the UK’s Trade Secrets (Enforcement, etc.) Regulations 2018 would need to be considered to ensure that know-how and confidential business information relating to body-augmenting technology are protected against unlawful acquisition, use and disclosure.
Regulating AI in neurotechnology
For devices of a more complex and technical nature, regulations relating to artificial intelligence will also likely come into play. As yet, no established regime has been finalised, and many jurisdictions are investigating the best way to approach the issue. Though still in draft, one of the most developed regimes is that of the EU, in the form of the draft AI Regulation (the “AI Act“). The AI Act defines which types of AI system fall within its remit, which uses of AI are prohibited, which use cases are high risk, and therefore, by default, which are low risk. Since high-risk AI includes anything used as a safety component within a device subject to EU harmonised standards legislation (a long-winded way of saying “CE marked devices”), neurotechnology will almost certainly be a high-risk AI use. This means that those seeking to take advantage of neurotechnology, whether as a user or a vendor, will have to comply with several, often strictly worded, safety and monitoring requirements. Based on the most recent drafts of the AI Act, failure to do so correctly may expose the breaching party to fines of up to €30 million or 6% of global annual turnover, whichever is higher.
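As a minimal illustration of how that cap scales with company size (assuming, per recent drafts, that the higher of the two figures applies):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Illustrative ceiling on fines under the draft EU AI Act:
    the higher of EUR 30 million or 6% of worldwide annual turnover."""
    return max(30_000_000.0, 0.06 * global_annual_turnover_eur)


# For EUR 1bn of turnover the ceiling is EUR 60m, not EUR 30m.
print(f"{max_fine_eur(1_000_000_000):,.0f}")  # 60,000,000
```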
The UK, by comparison, has currently proposed a more distributed approach, whereby sector-specific regulatory standards and industry regulators, rather than a single regime, are to regulate AI systems. Users and vendors of neurotechnology will therefore be required to comply with the specific regulations and standards that apply to the context of their use. Oversight of compliance is currently expected to be delegated to the individual regulators, such as the MHRA for medical devices and Ofcom to the extent wireless technologies are used, though the specifics of how this is to be achieved are yet to be established.
In the US, Congress is considering the American Data Privacy and Protection Act (ADPPA – HR 8152), which aims to create a comprehensive national data privacy and security framework by establishing standards on what types of data companies can gather from individuals and how that information can be used. Notably, a section of the pending bill seeks to take a significant step toward a federal enforcement mechanism over how businesses design and employ algorithms, and the underlying data used to support them. While it is difficult to predict what legislation may or may not become law, a consistent theme emerging from both sides of the aisle is the increased attention policymakers are paying to algorithms and the growing role they play in our lives. While approaches may vary, what remains clear is the regulatory recognition that neurotechnology, particularly that propelled by AI, requires a careful and considered approach. Regulations must be put in place to protect its users, while at the same time encouraging the development of technology that has the potential to substantially benefit society.
Coran Darling is a Trainee Solicitor at DLA Piper LLP with experience in data privacy, artificial intelligence, robotics, and intellectual property, having joined the firm from the technology industry. He is an active member of DLA Piper’s Working Group on AI and a committee member of the Society for Computers and Law specialist group on artificial intelligence with particular interest in algorithmic transparency.
Eliza Saunders is an intellectual property litigator with experience advising in relation to intellectual property disputes and resolution.
Keo Shaw, of DLA Piper California, is dual-qualified in the US and the UK and has particular experience advising on regulatory issues presented by digital health technologies.
Gareth Stokes, Partner, DLA Piper focuses on AI, information technology, strategic sourcing and intellectual property-driven transactions and is part of the leadership team for DLA Piper’s global AI Practice Group.