The battle over which uses of artificial intelligence Europe should ban

In 2019, trials of an artificial intelligence–driven lie detector began at the borders of Greece, Hungary, and Latvia. The system, called iBorderCtrl, analyzed facial movements to try to spot signs that a person was lying to a border agent. The trial was propelled by nearly $5 million in European Union research funding and nearly 20 years of research at Manchester Metropolitan University in the United Kingdom.

The trial sparked controversy. Polygraphs and other technologies built to detect lies from physical attributes have been widely declared unreliable by psychologists. Soon, errors were reported from iBorderCtrl, too. Media reports indicated that its lie-prediction algorithm didn’t work, and the project’s own website acknowledged that the technology could “imply risks for fundamental human rights.”

This month, Silent Talker, the company spun out of Manchester Met that commercialized the technology underlying iBorderCtrl, dissolved. But that’s not the end of the story. Lawyers, activists, and lawmakers are pushing for a European Union law to regulate AI that would ban systems that claim to detect human deception in migration – citing iBorderCtrl as an example of what can go wrong. Former Silent Talker executives could not be reached for comment.

A ban on AI lie detectors at borders is one of thousands of amendments to the AI Act being considered by officials from EU nations and members of the European Parliament. The legislation is intended to protect EU citizens’ fundamental rights, such as the right to live free from discrimination or to claim asylum. It labels some use cases of AI “high-risk,” some “low-risk,” and bans others outright. Those lobbying to change the AI Act include human rights groups, trade unions, and companies like Google and Microsoft, which want the law to distinguish between those who make general-purpose AI systems and those who deploy them for specific uses.

Last month, advocacy groups including European Digital Rights and the Platform for International Cooperation on Undocumented Migrants called for the law to ban the use of AI polygraphs that measure things like eye movement, tone of voice, or facial expression at borders. Statewatch, a civil liberties nonprofit, released an analysis warning that the AI Act as written would allow use of systems like iBorderCtrl, adding to Europe’s existing “publicly funded border AI ecosystem.” The analysis calculated that over the past two decades, roughly half of the €341 million ($356 million) in funding for the use of AI at the border, such as profiling migrants, went to private companies.

The use of AI lie detectors at borders effectively creates new immigration policy through technology, says Petra Molnar, associate director of the nonprofit Refugee Law Lab, treating everyone as suspicious. “You have to prove that you are a refugee, and you’re assumed to be a liar unless proven otherwise,” she says. “That logic underpins everything. It underpins AI lie detectors, and it underpins more surveillance and pushback at borders.”

Molnar, an immigration lawyer, says people often avoid eye contact with border or migration officials for innocuous reasons – such as culture, religion, or trauma – but doing so is sometimes misread as a signal that a person is hiding something. Humans often struggle with cross-cultural communication or speaking with people who have experienced trauma, she says, so why would people believe a machine can do better?