Tapping into someone’s mind may soon be technologically possible. Our institutions are ill-equipped to deal with the resulting human rights abuses, writes Jared Genser, in part one of a series on brain-machine interfaces.
It was once science fiction, but brain-machine interfaces — devices that connect a person’s brain to a computer, machine, or other device like a smartphone — are making rapid technological advances.
In science and medicine, brain-machine interfaces have revolutionized communication and mobility, enabling humans to overcome immense mental and physical challenges. They have helped a paralyzed, nonverbal man communicate at a rate of 18 words per minute with up to 94 percent accuracy; a quadriplegic person drive a Formula 1 race car; and a person with paralysis make the opening kick of the World Cup using a mind-controlled robotic exoskeleton. In consumer products, CTRL-Labs developed a wristband that lets users control a computer cursor with their mind, and Kernel’s Flow wearable helmet maps brain activity with unparalleled accuracy.
While these developments are promising, brain-machine interfaces also bring new human rights challenges. Other technology uses algorithms to extrapolate and collect data about users’ personal preferences and location, but brain-machine interfaces offer something completely different: they can connect the brain directly to machine intelligence.
Because the brain is the site of human memory, perception and personality, the interfaces between the brain and the machine pose a challenge not only to the privacy of our minds, but also to our sense of self and free will.
In 2017, the Morningside Group, made up of 25 global experts, identified five “neurorights” to characterize how current and future neurotechnology (methods to read and record brain activity, including brain-machine interfaces) could violate human rights. These include the right to mental identity, or a “sense of self”; the right to mental agency, or “free will”; the right to mental privacy; the right to fair access to mental augmentation; and protection from algorithmic bias, such as when neurotechnology is combined with artificial intelligence. By protecting neurorights, societies can maximize the benefits of brain-machine interfaces and prevent misuse and abuse that violate human rights.
Brain-machine interfaces are already being misused and abused. For example, an American neurotechnology startup sent wearable headbands that track brain activity to a school in China, where they were used in 2019 to monitor students’ attention levels without their consent. Similarly, workers at a Chinese factory wore hats and helmets that purportedly used brain signals to decode their emotions; an algorithm then analyzed emotional changes that could affect productivity.
While the accuracy of this technology is disputed, it sets a disturbing precedent. The misuse and abuse of brain-machine interfaces could happen even in democratic societies. Some experts fear that non-invasive (non-surgical) and portable brain-machine interfaces will one day be used by law enforcement on criminal suspects in the US, and have advocated expanding constitutional doctrines to protect civil liberties.
The rise of consumer neurotechnology highlights the need for laws and regulations that keep pace with the technology. In the US, brain-machine interfaces that do not require implantation in the brain, such as wearable helmets and headbands, are already being marketed as consumer products with claims that they support meditation and wellbeing, improve learning efficiency, or enhance brain health. Unlike implantable devices, which are regulated as medical devices, these wellness devices are consumer products subject to minimal or no regulation.
Consumers may not be aware of the ways in which using these devices can violate their human rights and privacy rights. The data that consumer neurotechnology collects can be stored insecurely or even sold to third parties. User agreements are lengthy and technical, and contain provisions that allow companies to store users’ brain data indefinitely and sell it to third parties without the kind of informed consent that protects individuals’ human rights. Today only a fraction of brain data can be interpreted, but that share will grow as brain-machine interfaces evolve.
The human rights challenges posed by brain-machine interfaces need to be addressed to ensure the technology’s safe and effective use. At the global level, the UN Human Rights Council, a body of 47 member states, stands ready to vote on and approve the UN’s first major study on neurorights, neurotechnology and human rights. UN leadership on neurorights would generate international consensus on a definition of neurorights and drive new legal frameworks and resolutions to address them.
Expanding the interpretation of existing international human rights treaties to protect neurorights is another important step forward.
The Neurorights Foundation, a US non-profit organization dedicated to protecting human rights and promoting the ethical development of neurotechnology, has released the first report showing that existing international human rights treaties are ill-equipped to protect neurorights. For example, the Convention Against Torture and the International Covenant on Civil and Political Rights were drafted before the advent of brain-machine interfaces and include terms and legal norms, such as “pain,” “liberty and security of person,” and “freedom of thought and conscience,” that need to be further interpreted with new language to address neurorights. Updating international human rights treaties would also legally require ratifying states to create national laws that protect neurorights.
Another important step is developing a global code of conduct for companies, which would also help create standards for collecting, storing and selling brain data. For example, if privacy became the default setting for consumer neurotechnology, with brain data shared only on an “opt-out” basis, users’ informed consent would be protected by letting them decide when their brain activity is monitored. A standard of this kind could be readily adopted into regulations at the national and industry levels.
Effective multilateral cooperation, national attention and industry involvement are all needed simultaneously to address neurorights and close the “protection gaps” in international human rights law. Ultimately, these approaches will help guide the ethical development of neurotechnology and, in the process, reveal the strongest avenues for preventing misuse and abuse of the technology.
Jared Genser is an adjunct law professor at Georgetown University Law Center and director of Perseus Strategies and general counsel to the Neurorights Foundation. This article was prepared with the assistance of Stephanie Herrmann, an international human rights lawyer with Perseus Strategies and the Neurorights Foundation. Professor Genser declares no conflict of interest.
Originally published under Creative Commons by 360info™.