These opaque algorithms decide whether you get help paying for electricity or whether your complaint is false


R. Alonso

Madrid


The Spanish Administration is increasingly digitized and has more and more tools capable of speeding up procedures that would otherwise take too long. Among them are various algorithms that, thanks to artificial intelligence, can make decisions that in some cases carry significant weight in a person's life. Among other things, these systems can decide whether you qualify for the social bonus for paying your electricity bill. They can also determine whether the complaint you are trying to file is false.

«They are being used to support decision-making or to make decisions outright. There are many types, and they can be decisive in whether you get a loan or a job», explains Gemma Galdón in conversation with ABC. She is executive director and founder of Éticas, a foundation dedicated to auditing algorithms and to raising awareness of the need to monitor automated decision-making systems and demand transparency in their use.

The problem, as Éticas explains, is that these algorithms are not audited, so we cannot know how they really work. A year ago, the Government announced the creation of a public observatory called Obisal that would address this, but to date there has been no further news about it. "It is curious how the State refuses to review these algorithms; there is no specific reason for it," says Galdón.

So that users can better understand these tools which, as noted, can make decisions that are decisive for their future, Éticas launched the Observatory of Algorithms with Social Impact (OASI) a few months ago. It shares information on more than 50 algorithms of this type, more than 15 of which are being used in Spain by companies and public institutions.

"Not all of them make very complex decisions or handle a lot of information. Many are nothing more than glorified Excel spreadsheets simplifying decisions that are actually very complex. That is where we find a very serious problem," says Galdón.

Detecting repeat offenders

As noted, the Spanish public administration uses various algorithms to speed up procedures and make decisions. Among them is VioGén, which since 2007 has been assessing the risk that aggressors in gender violence cases will reoffend. It is known how this algorithm was designed and how it works. However, its code has never been disclosed, nor is it known what weight the different questions in the protocol carry when the algorithm generates a risk score.
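To illustrate the kind of mechanism being described, here is a minimal sketch in Python of a weighted questionnaire score. The real VioGén questions, weights, thresholds and risk labels are not public, so every indicator name, weight and cut-off below is invented purely for illustration.

# Purely illustrative sketch of a weighted-questionnaire risk score.
# The real VioGén questions, weights and thresholds are NOT public;
# every indicator name, weight and cut-off below is invented.

ILLUSTRATIVE_WEIGHTS = {
    "previous_violence": 3.0,
    "explicit_threats": 2.0,
    "access_to_weapons": 2.5,
    "victim_perceives_danger": 1.5,
}

ILLUSTRATIVE_CUTOFFS = [  # (minimum score, label), highest first
    (7.0, "extreme"),
    (5.0, "high"),
    (3.0, "medium"),
    (1.0, "low"),
]

def risk_level(answers):
    """Map yes/no protocol answers to a risk label via a weighted sum."""
    score = sum(weight for question, weight in ILLUSTRATIVE_WEIGHTS.items()
                if answers.get(question, False))
    for cutoff, label in ILLUSTRATIVE_CUTOFFS:
        if score >= cutoff:
            return label
    return "not detected"

# Example: two positive answers give 3.0 + 2.0 = 5.0, i.e. "high".
print(risk_level({"previous_violence": True, "explicit_threats": True}))

In a system of this kind, the weights and cut-offs determine the outcome entirely, which is why the foundation insists that keeping them secret makes external scrutiny impossible.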

After the Spanish Government repeatedly declined Éticas' offers to audit the tool, the foundation decided in 2021 to carry out an external audit of its own. One of its most worrying findings is that, although VioGén's risk assessment was designed as a recommendation system whose results can be modified, the police keep the automatic result in 95% of cases. "The protection measures granted to the victim depend on that result. This means that the decision is effectively left in the hands of an algorithm," according to the organization led by Galdón.

Repeat offenders, terrorists and complainants

The State Security Forces and Bodies use other tools for support. The same happens at the regional level, as with RisCanvi, an algorithm used in Catalonia since 2009 that calculates the probability that a prisoner who is released will reoffend. Veripol, meanwhile, is an algorithm used nationwide to detect false reports. It uses natural language processing and machine learning to analyze a report and predict the probability that it does not correspond to reality.
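As an illustration of the general technique the article describes, and not of Veripol's actual model, features or training data, which are not public, a minimal text classifier of this kind could be sketched in Python with TF-IDF features and logistic regression; the tiny example reports and labels below are invented.

# Illustrative sketch only: a generic text classifier of the kind described
# (TF-IDF features + logistic regression). Veripol's real features, model
# and training data are not public; the toy dataset below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "They stole my phone on the bus, I did not see the thief",
    "My bag was snatched outside the station at 8 pm",
    "I lost my phone but I want to report it as stolen for the insurance",
    "Someone broke the car window and took the radio",
]
labels = [0, 0, 1, 0]  # 1 = false report (invented toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reports, labels)

new_report = ["My wallet disappeared, maybe stolen, I am not sure where"]
prob_false = model.predict_proba(new_report)[0][1]
print(f"Estimated probability the report is false: {prob_false:.2f}")

The output of such a model is a probability, not a verdict; how that probability is then used by officers is precisely the kind of question the foundation says cannot be answered without an audit.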

Another algorithm in use is called Tensor. This tool, created at the European level, monitors the development of possible terrorist activity on the Internet. The aim of the project was to build a digital platform equipped with algorithmic systems so that security forces across Europe could detect terrorist activity, radicalization and online recruitment as early as possible.

"The reality is that, as these are algorithmic systems used by law enforcement for security and terrorism prevention, there is very little public information available about the platform, about whether and by whom it is being used, and for what purposes," they point out from Éticas.

Then there is facial recognition, a technology to which experts have repeatedly drawn attention in recent years. «It has additional problems. It performs very poorly in uncontrolled scenarios. The ability of the AI to identify us on the street, wearing sunglasses, while we are moving, is very low. Even in the laboratory we find error rates higher than 30% under ideal conditions; when conditions are not ideal, they reach 60%. It is not a safe or reliable technology», explains Galdón.

Social Security, electricity bonus and training

Social services also use algorithms. One of the best known, and most controversial, is Bosco, a tool used by electricity companies to manage the social bonus for the payment of the electricity bill. In 2019, the Civio foundation says it discovered that the tool was flawed and denied help to people who were entitled to receive it. Since then, it has waged a legal battle to have the source code released so that the problem can be studied and resolved. "They are being denied access, which is curious, because they are actually doing them a favor by identifying that the system is not working well," explains Galdón.
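To make the dispute concrete, here is a purely hypothetical Python sketch of what a rule-based eligibility check of this kind could look like. Bosco's actual criteria and code are not public (that is precisely what Civio is litigating over), so the income thresholds, field names and rules below are invented.

# Purely hypothetical sketch of a rule-based eligibility check.
# Bosco's real criteria and source code are not public; the income
# thresholds and household fields below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Household:
    annual_income: float
    members: int
    large_family: bool

# Invented thresholds, NOT the real social-bonus rules.
ILLUSTRATIVE_INCOME_LIMITS = {1: 8_400.0, 2: 12_600.0, 3: 16_800.0, 4: 21_000.0}

def eligible_for_social_bonus(h: Household) -> bool:
    """Return True if the household passes this toy eligibility rule."""
    if h.large_family:  # in this toy rule, large families always qualify
        return True
    limit = ILLUSTRATIVE_INCOME_LIMITS.get(min(h.members, 4), 21_000.0)
    return h.annual_income <= limit

print(eligible_for_social_bonus(Household(11_000.0, 2, False)))  # -> True

Even in a toy version like this, a single wrong threshold or a mishandled field silently turns eligible applicants away, which is the kind of flaw Civio says it found and wants to be able to inspect in the real code.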

Social Security, for its part, uses a chatbot called Issa. Éticas points out that the chatbot may not be offering the same answers to different users, depending on how they express themselves. This creates a possibility of discrimination against people whose mother tongue is not Spanish, "since they may receive worse service or erroneous answers from the chatbot," the foundation notes.

There is also Send@, a SEPE algorithm that serves as a support tool for the guidance and labor-market transitions of job seekers. "In this case there is a risk of discrimination: since some groups, such as women and social minorities, have historically been discriminated against when looking for a job, the algorithm runs the risk of reinforcing this type of discrimination, and it may also fail to take into account the differences and diversity among job seekers," they say from Éticas.
