In 2019, guards at the borders of Greece, Hungary and Latvia began testing an artificial intelligence-powered lie detector. The system, called iBorderCtrl, analyzed facial movements to try to spot signs that a person was lying to a border agent. The trial was propelled by nearly $5 million in research funding from the European Union and nearly 20 years of research at Manchester Metropolitan University in the UK.
The trial sparked controversy. Polygraphs and other technologies developed to detect lies based on physical properties have been widely declared unreliable by psychologists. Soon, errors were also reported from iBorderCtrl. Media reports indicated that the lie-prediction algorithm was not working, and the project’s own website acknowledged that the technology “may pose risks to basic human rights.”
This month, Silent Talker, the Manchester Met spinoff company that created the technology behind iBorderCtrl, was dissolved. But that's not the end of the story. Lawyers, activists and lawmakers are pushing for a European Union law to regulate AI that would ban systems claiming to detect human deception in migration, citing iBorderCtrl as an example of what can go wrong. Former Silent Talker executives could not be reached for comment.
A ban on AI lie detectors at borders is one of thousands of amendments to the AI law being considered by officials from EU countries and members of the European Parliament. The legislation aims to protect the fundamental rights of EU citizens, such as the right to live free from discrimination or to apply for asylum. It labels some AI use cases as "high risk", some as "low risk", and outright bans others. Those lobbying to change the AI law include human rights groups, unions, and companies like Google and Microsoft, which want the law to distinguish between those who make general-purpose AI systems and those who deploy them for specific purposes.
Last month, advocacy groups including European Digital Rights and the Platform for International Cooperation on Undocumented Migrants called for a ban on the use of AI polygraphs that measure things like eye movements, voice or facial expression at borders. Statewatch, a civil liberties nonprofit, released an analysis warning that the AI Act as written would allow the use of systems such as iBorderCtrl, which would contribute to Europe’s existing “government-funded AI ecosystem at the border”. The analysis calculated that over the past two decades, about half of the €341 million ($356 million) in funding for the use of AI at the border, such as migrant profiling, went to private companies.
Using AI lie detectors at borders effectively creates new immigration policy through technology, one that labels everyone as suspicious, said Petra Molnar, associate director of the nonprofit Refugee Law Lab. "You have to prove you're a refugee and you'll be presumed to be a liar unless proven otherwise," she says. "That logic underlies everything. It supports AI lie detectors and it supports more surveillance and pushback at the borders."
Molnar, an immigration attorney, says people often avoid eye contact with border or migration officials for innocent reasons, such as culture, religion or trauma, but this is sometimes misconstrued as a signal that someone is hiding something. People often struggle with cross-cultural communication or talking to people who have experienced trauma, she says, so why should people believe a machine can do it better?
This post, "The battle over what use of artificial intelligence Europe should ban", was originally published at https://www.wired.com/story/europe-law-outlaw-ai/