“What teenager is predestined to get pregnant”

by time news

2023-05-27 03:29:47

What will come next? is the question we ask ourselves in the face of the barrage of news about artificial intelligence currently flooding the headlines. Back in 2018, however, this groundbreaking technology passed largely unnoticed. That year, the Ministry of Early Childhood of the province of Salta, in Argentina, together with the American giant Microsoft, presented an algorithmic system to predict adolescent pregnancy. It was a pioneering application of this kind at the state level, which they called the «Technological Platform for Social Intervention». The media, however, revealed that its implications were far more disturbing.

Juan Manuel Urtubey, then governor of Salta, openly declared on television: «With this technology you can foresee five or six years in advance, with name, surname and address, which girl is 86% predestined to have a teenage pregnancy». Yet once the results were obtained, nobody detailed what would happen next. Nor was there much transparency about the variables on which this AI's operation was based.

It was Wired that uncovered the case, detailing that the system's database was built from 200,000 residents of the city of Salta, including 12,000 women and girls between the ages of 10 and 19, and drew on personal and sensitive information such as age, ethnicity, country of origin, disability, and whether the dwelling had hot water in the bathroom.

The magazine further confirmed that "territorial agents" made visits to the girls' homes, where they conducted surveys, took photos, and recorded their GPS locations. It also indicated that the idea behind this close surveillance was to deploy it in the poor neighborhoods of the area and to monitor immigrants and indigenous people.

The absence of AI regulation in Argentina has prevented a formal and exhaustive review of the system, as well as an examination of its impact on the adolescents it labeled. It also remains unclear whether use of the program was ever fully phased out.

For its part, the Applied Artificial Intelligence Laboratory of the University of Buenos Aires exposed design errors in the platform and disputed that the certainty of its predictions was as high as had been declared. One of its researchers warned that this type of problem can lead politicians to make the wrong decisions.

Another case of artificial intelligence used at the state level was that of the Dutch government, which resigned en masse in February 2021 after a flawed AI system erroneously accused 26,000 families of childcare subsidy fraud. The scandal snowballed until it brought the government down. There are, therefore, important questions to ask before deciding to use a system that can be wrong.

The AI system used in Argentina was promoted as 'futuristic'. The experts who analyzed the case went so far as to suggest that behind it lies a persistent eugenic impulse, managed by a few.

An extreme version

China goes even further in its use of technology, employing genetic surveillance to determine which citizens will be predisposed to a specific disease. Under the program 'medical examinations for all', Amnesty International and other organizations denounced that blood samples, face scans, and voice recordings were forcibly collected in Xinjiang, alongside the use of artificial intelligence to build a genetic map and monitor the population.

Under Scrutiny

Regarding the implications of news such as that from Argentina, Rafael Amo, director of the Chair of Bioethics at the Comillas Pontifical University, points out that the most obvious problem is the lack of respect for people's privacy, which derives from the bioethical principle of autonomy.

In the United States, and especially in the European Union, plans are being developed to audit algorithmic systems. Amo specifies that many attempts to make artificial intelligence legislation robust and reliable involve controlling the data, because artificial intelligence feeds on data, which has become the liquid gold of the moment. The first thing any AI regulation must do is protect privacy.

For this reason, Amo emphasizes that «by autonomy we have the right to the confidentiality of our data, especially health data. Violating this means breaking that agreement. And if it also occurs in the case of minors and the most vulnerable people, it would ultimately mean control of the weak».

