The EU must prohibit discrimination in the Artificial Intelligence Law

by time news

2023-09-28 08:00:00

The European Union (EU) must ban dangerous technologies based on artificial intelligence in the Artificial Intelligence Law, Amnesty International said today. The Union aims to finalize the first comprehensive regulation on artificial intelligence this autumn.

Numerous states around the world have used unregulated artificial intelligence systems to evaluate applications for social benefits, monitor public spaces or determine the likelihood of someone committing a crime. These technologies are often labeled “technical solutions” to structural problems such as poverty, sexism and discrimination. They feed sensitive and often staggering amounts of data into automated systems to decide whether or not a person should receive housing, social benefits, healthcare and education, or even be charged with a crime.

But instead of solving social problems, artificial intelligence systems have flagrantly amplified racism and inequality, and have perpetuated harm to human rights and discrimination.

“These systems are not used to improve people’s access to social benefits: they are used to cut costs.”

Mher Hakobyan, Amnesty International advisor for advocacy work on the regulation of artificial intelligence.

“These systems are not used to improve people’s access to social benefits: they are used to cut costs. And where systemic racism and discrimination already exist, these technologies amplify harm to marginalized communities at a much greater scale and speed,” said Mher Hakobyan, Amnesty International’s Advocacy Adviser on AI Regulation.

“Rather than focusing disproportionately on the ‘existential threats’ posed by artificial intelligence, EU legislative bodies should formulate laws that address existing problems, such as the fact that these technologies are used to make blatantly discriminatory decisions that undermine access to fundamental human rights.”

Cruel deprivation of child care subsidies

In 2021, Amnesty International documented how an artificial intelligence system used by the Dutch tax authorities had racially profiled people receiving childcare subsidies. The tool was intended to verify whether subsidy applications were authentic or fraudulent, but it wrongly penalized thousands of low-income and immigrant parents, plunging them into exorbitant debt and poverty.

Batya Brown, who was falsely accused of welfare fraud under the Dutch childcare subsidy scheme, said tax authorities demanded she repay hundreds of thousands of euros, enveloping her in a web of bureaucracy and economic anxiety. Years later, justice has still not been done.

“It was so strange. I received a letter saying that I had been mistakenly given childcare subsidies. And I thought, ‘How can that be?’ I was in my early twenties. I didn’t know much about the tax authorities. I found myself in this world of paperwork. I saw how everything was slipping away from me. Since we have been recognized as victims of what I call the ‘subsidy crime’, even four years later, they continue to treat us like a number,” said Batya Brown.

“The Dutch childcare subsidies scandal must serve as a warning to EU legislative bodies. Using artificial intelligence systems to monitor the provision of essential subsidies can have devastating consequences for marginalized communities. The Artificial Intelligence Law must prohibit social scoring, profiling and risk assessment systems, whether they are used to monitor beneficiaries of social benefits, to ‘predict’ the likelihood of a crime being committed or to decide on asylum applications,” Mher Hakobyan added.

Prohibition of the use and export of invasive surveillance systems

Under the pretext of “national security”, facial recognition systems are becoming the tool of choice for governments that intend to excessively monitor people in society. Law enforcement agencies use these systems in public spaces to identify people who may have committed a crime, despite the risk of wrongful arrests.

Amnesty International, which is part of a coalition of more than 155 organisations, has called for a total ban on the use of retrospective and real-time facial recognition in publicly accessible spaces, including in border areas and around detention centres, by all actors, without exception, in the EU.

In places like New York, Hyderabad and the Occupied Palestinian Territories (OPT), Amnesty International has documented and denounced how facial recognition systems accelerate existing systems of control and discrimination.

In the OPT, Israeli authorities use facial recognition to monitor and control the Palestinian population, limiting their freedom of movement and their ability to access fundamental rights.

Amnesty International’s investigation has also revealed that cameras manufactured by the Dutch company TKH Security are being used as part of the surveillance system deployed in occupied East Jerusalem.

“In addition to ensuring a complete ban on facial recognition within the EU, legislative bodies must ensure that this technology and other highly problematic technologies banned within the Union are not manufactured in the Union for export to countries where they are used to commit serious human rights violations. The EU and its Member States have an obligation under international law to ensure that companies within their jurisdiction do not profit from human rights abuses by exporting technologies used for mass surveillance and racist policing,” Mher Hakobyan added.

Abuses against the migrant population

EU Member States have increasingly resorted to the use of opaque and hostile technologies to facilitate abuses against migrants, refugees and asylum seekers at their borders.

Legislative bodies must ban racist profiling and risk assessment systems that label migrants and asylum seekers as “threats”, as well as technologies that predict border movements and deny people the right to asylum.

“We don’t have to get to the point of Terminator or The Matrix for these threats to be existential”

Alex Hanna, Director of Research at the Distributed AI Research Institute (DAIR)

“Every time a person passes through an airport, crosses a border, applies for a job, they are subjected to the decisions of these models. We don’t have to get to the point of Terminator or The Matrix for these threats to be existential. For people, it is existential if it takes away their life opportunities and their livelihoods,” said Alex Hanna, Director of Research at the Distributed AI Research Institute (DAIR).

Power to self-regulate

Large technology companies have also pushed to introduce legal loopholes in the risk classification process of the Artificial Intelligence Law, which would allow technology companies themselves to determine whether their technologies should be classified as “high risk.”

“It is crucial that the EU adopts legislation on artificial intelligence that protects and promotes human rights. Granting big tech companies the power to self-regulate seriously undermines the primary goals of this law, including protecting people from human rights abuses. The solution is very simple: return to the European Commission’s original proposal, which provides a clear list of situations in which the use of an artificial intelligence tool would be considered high risk,” concluded Mher Hakobyan.

Additional information

Amnesty International, as part of a coalition of civil society organizations led by the European Digital Rights Network (EDRi), has been calling for EU regulation on artificial intelligence that protects and promotes human rights, including the rights of people on the move.

High-level trilateral negotiations, known as “trilogues”, between the European Parliament, the Council of the EU (representing the Union’s 27 Member States) and the European Commission are planned for October, with the aim of adopting the Artificial Intelligence Law before the current EU mandate ends in 2024.


