This is what you should know about the AI Act

by times news cr

2024-03-14 05:17:31

The EU Parliament has given the green light to a law on artificial intelligence. But what does the regulation actually mean for consumers? An overview.

There has been a lot of discussion in recent months about the EU regulations on artificial intelligence – the AI Act: some consider the rules to be too strict, others too lax.

Companies fear that the regulation will slow down innovation. Consumer advocates, however, complain that the risks of some applications, such as shopping assistants or language programs like ChatGPT, are not taken seriously enough.

Some people wonder why the EU wants to regulate the use of artificial intelligence (AI) at all. And if the technology is monitored, what exactly does the European Union want to regulate? Here are the answers.

Why does the EU want to regulate the use of AI?

First: The AI Act is a product safety regulation. This means that the European Union wants to ensure that products based on artificial intelligence meet EU-wide safety standards and do not pose a danger to users.

From the EU perspective, most applications such as digital voice assistants pose little or no risk.

But things are different with artificial intelligence in medicine or in security systems. According to the EU, the latter include, for example, biometric identification systems that recognize people’s fingerprints or faces.

AI-controlled facial and emotion recognition, as is already used in the financial and insurance sectors, can also be problematic.

For example, some AI systems are used to make decisions that influence the personal interests of private individuals. These include programs in the areas of recruitment and education.

The EU fears that such applications endanger fundamental democratic rights and the security of private individuals. That’s why it wants to regulate these AI systems and monitor their development.

How does the EU want to monitor AI systems in the future?

The EU divides AI systems into four categories in its AI Act.

  • Minimal or no risk: Programs with minimal risk should be able to be developed and used without additional legal obligations. These include, among other things, spam filters in email programs and AI-supported video games.
  • Limited risk: The EU classifies the risk of some AI systems as limited. This is the case, for example, when using chatbots like ChatGPT. According to the EU, users should be aware that they are interacting with a machine and should be informed accordingly.
  • High risk: This category should include AI systems that, according to the EU, “have a negative impact on people’s security or their fundamental rights.” Examples include applications in autonomous vehicles or medical devices.
  • Unacceptable risk: Only a small number of systems are expected to fall into this category. According to the EU, these are applications that run counter to its values because they violate fundamental rights.

These include programs that are used to build databases for facial recognition. In the EU’s view, emotion recognition in the workplace by such AI systems also poses an unacceptable risk.

This category also covers programs that could be used for so-called social scoring, in which people are rated based on their origin and behavior.

Applications that infer a person’s sexual orientation or political opinions from their biometric data are also unacceptable to the EU.

How does the EU want to identify the risks of applications?

The European Union wants to develop a methodology for identifying high-risk AI systems, though it is not yet known what this assessment will look like. In future, providers of such technologies will be required to disclose how their systems work.

The EU also wants to attach to the regulation a list of use cases that it considers high-risk, and it says it intends to update this list continuously.

What obligations apply to providers of high-risk AI systems?

Before an AI application comes onto the market, providers are to have their software assessed by the EU. After this evaluation, developers can demonstrate that “their system meets the mandatory requirements for trustworthy AI,” the EU writes.

Providers of high-risk AI systems should also build quality and risk management systems into their applications. The aim is to ensure that the risks for users are kept as low as possible.

How can consumers protect themselves?

The AI Act provides for so-called product liability rules to ensure that users who have been injured or financially harmed by a defective AI product can receive compensation.

There should also be the possibility of submitting a complaint to a national authority in the event of rule violations. These institutions should be able to initiate appropriate monitoring procedures.

What happens next with the law?

After the vote in the EU Parliament, the Council of the European Union must formally approve the AI regulation in April. Most of the regulation’s rules will then apply 24 months after it enters into force. However, some provisions take effect earlier: the bans after just six months, and the rules on general-purpose AI models after twelve months.
