This is what the AI law considers unacceptable risk

by time news

2023-12-22 02:06:17

The world's first agreement on a law regulating artificial intelligence has been reached in Europe. As a pioneering text, it could influence other international regulations, as happened with the General Data Protection Regulation. That is why it is in the spotlight. The bill is structured around the risks that AI may create, and at the highest level are those considered "unacceptable risks".

Although we will have to wait for the final text, for now the agreement distinguishes three types of risk: unacceptable risk, when a use is considered to seriously compromise people's rights; high risk, if it affects fundamental rights but its use can be justified in certain contexts; and limited risk.

It is prohibited to classify people by beliefs, race or sexual orientation

What would you think if, during the selection process for a job, you were ruled out as a candidate because an AI system had identified your sexual orientation and your potential boss disliked it? They could do it without even calling you for an interview, without ever meeting you.

This is one of the so-called "unacceptable risks" derived from the use of artificial intelligence. It is based on biometric categorization, which has the potential to infer political ideology, religious or philosophical beliefs, as well as race and sexual orientation.

Facial recognition, used in both public and private security systems, is fed with large numbers of images. These can be collected from the Internet or from the huge number of video surveillance cameras in public places. This is the case of Clearview AI, a facial recognition company that supplies its product to governments and police agencies for identifying people. This form of automated image collection will be prohibited.

Why? Mainly because of the intrusion into privacy, but also because these systems have a large margin of error when identifying people. If admitted as evidence in trials, for example, it would make no sense to use a system that fails to identify subjects in proceedings that can lead to criminal sanctions. Additionally, such systems allow people to be monitored in public spaces.

Also prohibited are emotion recognition systems that constantly monitor individuals in educational or workplace settings to prevent them, among other things, from falling asleep, losing concentration or showing apathy. Here too the law speaks of "unacceptable risk".

Surveillance at work

Where these systems have been deployed in work environments, workers have described extreme discomfort at feeling watched all the time. But the problem goes beyond discomfort: companies could impose measures when these emotions are detected in order to raise productivity if workers do not perform as expected. What's more, a bad night's sleep may be enough for such a system to flag our "low productivity."

It can certainly make sense to use facial recognition and emotion monitoring for a clearly positive social purpose, such as preventing a person from falling asleep at the wheel and causing an accident. What is controversial is allowing these technologies to be used for purposes such as immigration control or as a justification for police action.

Controlled behavior

Not long ago, the Dutch government decided to use an artificial intelligence system to organize the distribution of social benefits among the most disadvantaged. What was the problem? It discriminated against immigrants and Black people, two vulnerable groups that were excluded from receiving this aid. The consequences were so serious that the minister who implemented the system eventually had to resign.

In China, they have gone even further. The government has established a points system to evaluate its citizens. Obtaining a low score can lead to travel restrictions, or block access to bank credit or employment.

To avoid this type of situation, the new regulation proposes to ban uses dedicated to exploiting people's vulnerabilities, such as age, disability or social and economic situation.

Also not allowed are artificial intelligence systems that manipulate human behavior to constrain free will. These are systems used, for example, to keep users hooked on social networks for longer, to encourage the purchase of certain products or to prevent people from making certain decisions freely.

Police uses are not prohibited, but are considered high risk

However, the law does not prohibit security forces from using biometric identification systems in publicly accessible spaces for police purposes. In these cases, judicial authorization is required, and such systems can only be used within a limited time and place.

These cases are the following:

Search for specific victims, for example, in kidnappings, trafficking and sexual exploitation.

Prevention of specific and present terrorist threats.

Location or identification of a person who has committed a serious crime such as terrorism, trafficking and sexual exploitation, murder, kidnapping, rape, armed robbery, participation in an organized criminal group or environmental crimes.

The skepticism of certain groups regarding these applications is understandable, considering the large number of false positives that occur in live facial identification. If we add to that the number of crimes included in the third point, it almost seems that any use in police investigations has been given a general green light.

The effort that states have made to prevent AI from being banned in police investigations has been more than notable. Perhaps specific legislation should be drawn up for these uses in criminal matters, as has been done for the use of personal data.

Be that as it may, we can consider the agreement a great success, given that respect for fundamental rights remains mandatory. And citizens, regardless of whether a given use is prohibited or not, may turn to the courts when they feel unprotected.
