Regulating artificial intelligence while protecting scientific research

by time news

2023-05-03 15:00:09

The European Commission has taken a position, with the publication, in April 2021, of a proposal for legislation on artificial intelligence (AI) and the adoption, in December 2022, of a general-orientation text that bases the regulation of AI systems on a risk-benefit analysis. Software deemed high-risk is subject to numerous certification, documentation, data-management, quality, and security obligations. This concerns, for example, systems used to determine admission to an educational program or eligibility for social benefits, to replace a polygraph, or to manage critical infrastructure such as the supply of water or electricity.

The development of systems that exploit the vulnerabilities of certain people, aim to manipulate them, or assign them a “social rating” is prohibited, as is real-time remote biometric identification in public spaces, apart from specific exceptions such as the prevention of terrorist attacks, exceptions which, according to the online newspaper Contexte, are expected to disappear from the version to be voted on in May. While some associations propose going even further, it will be interesting to see how the use of automatic video surveillance for the Olympic Games, adopted on March 23 by the National Assembly, which fortunately does not provide for the use of facial recognition, will comply with the European regulation once it is finally voted on.

Read also: Olympic Games 2024: MEPs authorize algorithmic video surveillance before, during and after the Games

An important difference between the 2021 proposal and the 2022 text is the explicit exclusion of scientific research (phew!) and activities relating to the armed forces from the regulation's scope, a decision that might have been more controversial before the Ukrainian crisis. Another major difference is the introduction of specific provisions for general-purpose AI systems, since they can be used as, or integrated into, high-risk systems.

An alarmist perception

Even if the definition of general-purpose AI systems is much broader, since it includes, for example, image and speech recognition, these provisions probably foreshadow a general framework for the technologies grouped under the now familiar notion of generative AI. In this context, the open letter calling for a “pause in giant AI experiments,” published recently in response to the highly (overly?) publicized release of ChatGPT and GPT-4 by OpenAI, may cause concern.


