AI is going to be regulated in Europe to respect fundamental rights

by time news

2023-05-07 21:34:21

On Thursday, April 27, the European Parliament finally reached an agreement on the Artificial Intelligence Regulation. It is an important step, because the use of artificial intelligence is not without risk.

The proposed regulation classifies uses of AI according to the risk they involve: prohibited uses, high-risk uses, and uses that fall into neither of the two previous categories.

Classifying people's trustworthiness is forbidden

The prohibited uses contemplated by the European artificial intelligence regulation include:

  • AI that can subliminally alter a person's behavior, causing physical or psychological harm. Along these lines, AI-based systems that recognize emotions will be prohibited in the fields of education, employment, border control and law enforcement.

  • Uses that exploit the vulnerabilities of a specific group of people, whether due to age or to physical or mental disability, in order to distort a person's behavior.

  • Uses by or on behalf of public authorities to evaluate or classify people's trustworthiness through scoring systems based on their social behavior or personal characteristics. This also covers the prohibition of systems that predict crime or administrative fraud.

  • The real-time use of biometric identification systems in public spaces, except to search for specific victims of crime or to prevent a specific, imminent and substantial threat to life or physical integrity, or a terrorist threat.

  • The arrest, location or prosecution of the perpetrator or suspect of a crime. Exceptions will only be made for serious crimes, and only with prior judicial authorization.

Employment, public services and justice

The new regulation also contemplates uses considered high risk. Among them:

  • Systems responsible for the safety of another product or for their own safety.

  • Systems intended for the biometric identification and categorization of natural persons, the management and operation of critical infrastructure, and education and vocational training.

  • Employment, worker management and access to self-employment.

  • Access to and enjoyment of essential private services and of public services and benefits, as well as systems used for law enforcement by the security forces. Systems related to migration management, asylum and border control are also included.

  • Administration of justice and democratic processes.

In the last parliamentary debate, it was agreed to also include the recommendation systems of large online platforms, the systems in charge of key infrastructure when there is a risk of environmental impact, and generative artificial intelligence such as ChatGPT or Dall-e.

In all these situations there are uses that put citizens' fundamental rights at risk, and measures will be required to prevent those rights from being violated. These measures will, of course, be aligned with the artificial intelligence guidelines of the European Commission's High-Level Expert Group, which has been thinking about the matter for years.

Assessing risks throughout the lifecycle

The European regulation states that the risk assessment must be established, implemented, documented and maintained throughout the life of the intelligent system, and repeated iteratively. The risks that may arise from a system's intended use, but also from potential misuse, must be assessed, and measures must be implemented to prevent them.

In addition, a data management plan is required, in compliance with the General Data Protection Regulation. For all AI systems it is important to provide access to technical documentation, and they must be designed so that their operation is transparent enough for users to interpret their results. The instructions for use should also be easy to understand.

The regulation also stipulates that AI-based systems must be designed to allow human oversight of their results, in order to prevent and mitigate risks to health, safety and the fundamental rights of citizens.

The requirements do not end there: systems must also be accurate (measured with appropriate metrics), robust (with backups and contingency plans in case of failure) and cyber-secure (prepared against attempts at use by unauthorized third parties).
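Purely as an illustration of how a provider might keep track of these obligations, here is a minimal sketch of a compliance record in Python. The structure, field names and example values are hypothetical assumptions for this article, not something prescribed by the regulation:

```python
# Hypothetical compliance record for a high-risk AI system.
# All field names and values are illustrative; the regulation does not
# prescribe any particular format.
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    description: str     # risk arising from intended use or foreseeable misuse
    mitigation: str      # measure implemented to prevent or reduce the risk
    last_reviewed: date  # risk management is iterative, over the whole lifecycle

@dataclass
class ComplianceRecord:
    system_name: str
    risks: list[RiskEntry] = field(default_factory=list)
    data_management_plan: str = ""      # GDPR-compliant data governance
    technical_documentation: str = ""   # accessible and interpretable by users
    human_oversight: str = ""           # how people can supervise the results
    accuracy_metrics: dict[str, float] = field(default_factory=dict)
    robustness_plan: str = ""           # backups and contingency plans for failures
    cybersecurity_measures: str = ""    # protection against unauthorized third-party use

# Example usage with made-up values.
record = ComplianceRecord(
    system_name="example-recruitment-screener",
    risks=[RiskEntry(
        description="Biased ranking of candidates",
        mitigation="Periodic bias audits and human review of rejections",
        last_reviewed=date(2023, 5, 1),
    )],
    accuracy_metrics={"f1_score": 0.91},
)
```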

All these principles were already included in the ethics guidelines of the High-Level Expert Group. In the recent European parliamentary debate it was agreed to introduce others that were missing, such as social and environmental well-being, respect for diversity and non-discrimination, and the principle of fairness.

Towards trustworthy artificial intelligence

The fact that the regulation is based on respect for fundamental rights and on the seven guiding principles of the European Commission's High-Level Expert Group establishes a clear European standard: trustworthy artificial intelligence.

Compliance with these requirements will be overseen by national AI agencies and, for systems used in more than one country or with a large transnational impact, by the future European AI Board.

Although the legislation has taken time to prepare, it is indisputably a text that will give citizens guarantees by prioritizing the respect and protection of fundamental rights, with the human being at the center of its design.
