ChatGPT should not be classified as high risk

by time news

2023-04-28 22:32:37

The success of, and criticism surrounding, the artificial intelligence ChatGPT have alarmed politicians. The Italian data protection authority blocked the chatbot at the end of March. In an open letter, scientists and IT celebrities called for the training of very powerful models to be suspended, and a dozen key MEPs demanded action in an open letter of their own.

The EU Commission's 2021 AI law even provides a suitable lever. It is now largely clear how the EU Parliament intends to use it to regulate generative AI systems, i.e. systems that appear to produce creative texts, images, videos or computer programs.

Negotiators from the various factions have agreed on a fundamental compromise on the AI law, which now also includes rules for generative AI systems. The central point: they are not classified as high-risk technology per se, as initially considered. Under the AI law, high-risk applications are subject to strict conditions: the data from and with which these AI applications learn must be selected in such a way that nobody is disadvantaged, and a human must always retain ultimate control.

Making this the rule for generative systems went too far for the deputies. “Bans or strong over-regulation of AIs like ChatGPT would only lead to innovations being created outside the EU,” says Svenja Hahn (FDP). “The solution that we have agreed on gives ChatGPT and other applications scope to develop in the EU,” emphasizes Axel Voss (CDU), who is responsible for the AI law.

According to the compromise, the requirements that generative systems have to meet are determined by what they are used for, and the same rules apply to them as to other self-learning systems. If ChatGPT is used in search engines, for example, it is not a high-risk application; if it is used for insurance or the selection of job applicants, it is.

Use of biometric data?

The classification of an AI as “high risk”, in turn, depends, among other things, on whether an application poses a direct or indirect danger to life, whether it can discriminate against people, and whether it can endanger the rule of law or democracy. Regardless of classification, the compromise provides for a number of conditions for generative systems like ChatGPT.

In the new Article 28b, their developers are obliged to check the systems in advance for the risks they pose and to implement remedial measures against imminent dangers, something the ChatGPT developer OpenAI did not do. The developers must also ensure that the AI systems are secure, for example against cyber attacks, and document which data they use for training.

The question of copyright remains unregulated. AI can only write texts and otherwise be “artistic” when trained on the works of authors and artists, and those works can fall under copyright protection. Article 28b merely stipulates that producers must roughly state the extent to which they have used protected material for training.

Voss still wants to address this issue in the forthcoming negotiations with the Council of Ministers. The Council has not yet considered whether and how generative systems should be regulated by the AI law; it was simply too quick for that, fixing its position on the law in 2022, before ChatGPT made waves. The AI law can only come into force once both EU institutions agree on a common line.

Both sides fundamentally support the Commission’s approach of regulating only risky AI applications. This is intended to strengthen trust in AI and make it easier to use non-risky applications, which, according to the Commission, account for 90 percent of all applications.

Controversy looms over the question of whether the use of biometric data, the recognition of emotions and “predictive policing” should be prohibited. Parliament is demanding this, but it is likely to meet resistance from the member states. First, however, the plenary must accept the compromise in June.

