The EU Artificial Intelligence Law and its impact on intellectual property protection

by time news

2024-02-07 09:54:55

In December 2023, the European Union Parliament and the EU Council reached a provisional agreement on the EU Artificial Intelligence Law. This sets the stage for finalizing and implementing the law, which uses a tiered approach to Artificial Intelligence in which requirements are based on the level of risk a system poses.

The EU Artificial Intelligence Law would be the first wide-ranging regulation on Artificial Intelligence of its kind. Next, we analyze the framework of the EU Artificial Intelligence Law and how it could affect companies.

What is the EU Artificial Intelligence Law?

According to a European Parliament press release, the EU Artificial Intelligence Law “aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected against high-risk Artificial Intelligence, while driving innovation and making Europe a leader in this field”.

It is a tall order. In short, the EU Artificial Intelligence Law would establish limits on what Artificial Intelligence systems can do based on the level of risk they present. The law would prohibit certain AI activities outright, such as systems that exploit people’s vulnerabilities related to age, disability, or social or economic status. At other levels, Artificial Intelligence systems would be subject to transparency requirements so that people know they are dealing with a machine.

Who is affected by the EU Artificial Intelligence Law?

There are multiple answers to this question.

More generally, the EU Artificial Intelligence Law will affect EU citizens in general. Advocates of the law are touting its safeguards for users. For example, in Parliament’s press release, rapporteur Dragos Tudorache stated: “The EU is the first in the world to establish robust regulation on Artificial Intelligence, guiding its development and evolution in a human-centered direction”.

More specifically, the law will affect Artificial Intelligence systems themselves, including the companies that create them and the companies that incorporate them into their products and services. Additionally, certain obligations apply to specific sectors, such as banking and insurance, and to Artificial Intelligence systems used to influence the outcome of elections. On a broader level, however, part of what is notable about the law is its breadth, as it would apply to Artificial Intelligence systems across all industrial sectors.

Thirdly, it is important to analyze the EU Artificial Intelligence Law from the point of view of who must comply with it. Naturally, this includes companies within the EU that provide Artificial Intelligence systems. But the law would also apply to companies outside the EU that place AI systems on the EU market, and to providers and “deployers” of AI systems outside the EU if the output produced by the system is used in the EU. In other words, the effects of the EU Artificial Intelligence Law will reach far beyond the EU.

What are the different risk levels set out in the EU Artificial Intelligence Law and what do they mean for business?

The structure of the EU Artificial Intelligence Law is based on four levels of risk: unacceptable, high, limited and minimal. Each level carries corresponding obligations. The law would cover a variety of types of risk, in areas ranging from the environment to democracy. Companies should consider which of these categories their activities fit into to determine next steps:

  • Minimal: It appears that many companies that use or provide Artificial Intelligence systems may have no obligations under the law. According to a Parliament press release, “The vast majority of Artificial Intelligence systems fall into the minimal risk category”. These types of Artificial Intelligence systems include spam filters, recommendation systems, and the like. While minimal-risk systems get a “free pass,” they may be subject to voluntary measures such as codes of conduct.
  • Limited: limited or specific transparency risk appears to be a loose catch-all category for systems such as chatbots that interact with humans, some emotion recognition and biometric categorization systems, and deepfakes. (Note, however, that emotion recognition and biometric categorization systems may also fall into a higher risk level, depending on use.) Systems in this category would be subject to certain transparency obligations so that people know they are interacting with a machine and/or that these systems are being used.
  • High: High-risk Artificial Intelligence systems are classified as such “due to their significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law”. High-risk systems include systems used as safety components or falling under EU health and safety harmonization legislation, as well as systems used in specific areas: remote biometrics; critical infrastructure; education and vocational training; employment, worker management and access to self-employment; access to essential private services and public services and benefits; law enforcement; migration, asylum and border control management; and administration of justice and democratic processes. Systems in the high-risk category are subject to a number of requirements, for example risk mitigation, detailed documentation, human oversight, and cybersecurity.
  • Unacceptable risk: Finally, the EU Artificial Intelligence Law would prohibit a number of specific Artificial Intelligence activities (with some law-enforcement exemptions):

– Biometric categorization using sensitive characteristics such as political, religious, philosophical beliefs, sexual orientation and race.

– Untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases.

– Recognition of emotions in the workplace and in educational institutions.

– Social scoring based on social behavior or personal characteristics.

– Artificial Intelligence systems that “manipulate human behavior to circumvent their free will”.

– Artificial Intelligence that exploits people’s vulnerabilities (due to age, disability, social or economic situation).

Notably, the EU Artificial Intelligence Law will not apply to Artificial Intelligence systems used solely for military or defense purposes, or for research and innovation. Nor does it apply to people who use Artificial Intelligence for non-professional reasons.
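The four tiers above can be pictured as a simple classification. The sketch below is purely illustrative: the mapping uses only examples named in this article, and real classification under the law depends on detailed legal criteria, not a keyword lookup.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative, non-exhaustive mapping of examples mentioned in the article.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "emotion recognition in the workplace": RiskTier.UNACCEPTABLE,
    "remote biometric identification": RiskTier.HIGH,
    "critical infrastructure": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "deepfake generation": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "recommendation system": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    # Default to minimal, echoing the article's note that the vast
    # majority of systems fall into that category.
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
```

Note that emotion recognition and biometric systems illustrate why a lookup table is too crude in practice: the same technology can land in different tiers depending on how and where it is used.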

How will the EU Artificial Intelligence Law reform intellectual property rights?

The use of copyrighted material to train AI has been a high-profile issue lately, with several well-publicized lawsuits by artists and writers against AI companies. The EU Artificial Intelligence Law addresses this issue. General-purpose Artificial Intelligence (“GPAI”) systems must comply with measures including following EU copyright law and providing detailed summaries of the content used to train them. GPAI must also honor opt-outs by copyright owners who have declined to make their works available for text and data mining.

Another transparency requirement that could have implications for intellectual property is the requirement to label deepfakes.

Of course, any sweeping law that affects technology will also affect intellectual property rights and development. This can include everything from disclosure and contract requirements concerning systems and processes to testing conditions for high-risk AI.

What are the sanctions under the EU Artificial Intelligence Law?

Penalties for non-compliance can range from €7.5 million or 1.5% of global turnover to €35 million or 7% of global turnover, depending on the violation and the size of the company.
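As widely reported, each fine band is the greater of a fixed cap and a percentage of worldwide annual turnover. A minimal sketch of that arithmetic, assuming the "whichever is higher" reading of the reported figures:

```python
def max_fine(global_turnover_eur: float, cap_eur: float, pct: float) -> float:
    # Reported rule: the applicable maximum is the greater of the fixed cap
    # and the percentage of worldwide annual turnover.
    return max(cap_eur, global_turnover_eur * pct)

# Most serious violations: up to EUR 35 million or 7% of turnover.
# For a company with EUR 1bn turnover, the 7% figure (EUR 70m) governs.
print(max_fine(1_000_000_000, 35_000_000, 0.07))

# Lowest band cited in the article: EUR 7.5 million or 1.5%.
# For EUR 100m turnover, 1.5% is only EUR 1.5m, so the fixed cap governs.
print(max_fine(100_000_000, 7_500_000, 0.015))
```

The takeaway for large companies is that the percentage, not the headline euro figure, sets the ceiling.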

When will the EU Artificial Intelligence Law come into force?

With the recent provisional agreement, the process appears to be nearing completion. Some expect that the law could be finalized before the June 2024 parliamentary elections.

Two versions of an unofficial copy of the text of the law have been leaked online. Reports on the leaked text state that the law will enter into force on the twentieth day after its publication in the Official Journal of the EU. However, its provisions would not apply until some time later. Prohibitions on “unacceptable risk” activities would reportedly apply six months after the law takes effect, while the requirements for high-risk AI systems discussed above could take up to thirty-six months to apply.
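The reported timeline can be sketched with simple date arithmetic. The publication date below is hypothetical, chosen only for illustration:

```python
from datetime import date, timedelta

def entry_into_force(publication: date) -> date:
    # Reported rule: in force on the twentieth day after publication
    # in the Official Journal of the EU.
    return publication + timedelta(days=20)

def add_months(d: date, months: int) -> date:
    # Simple month arithmetic (safe here because the day of month is small).
    years, month0 = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month0 + 1)

published = date(2024, 6, 12)                  # hypothetical publication date
in_force = entry_into_force(published)         # twenty days later: 2024-07-02
bans_apply = add_months(in_force, 6)           # prohibitions: six months on
high_risk_apply = add_months(in_force, 36)     # high-risk rules: up to 36 months on
```

The practical point is that companies would have a staggered compliance runway: the bans bite first, the high-risk obligations much later.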

How to take advantage of Artificial Intelligence to protect intellectual property online

The EU Artificial Intelligence Law was first proposed in 2021. Now that we have a clearer idea of what the law will look like when finalized, and possibly even a timeline, it is time for companies to take stock of the Artificial Intelligence systems they use and offer.

Artificial Intelligence and intellectual property are often intertwined, and the increasing prevalence of Artificial Intelligence systems highlights the potential risks for intellectual property owners. Artificial Intelligence can also be a tool to protect intellectual property, with Artificial Intelligence systems available to assist in the detection and enforcement of infringements.

