European Parliament Adopts Artificial Intelligence Act

Quick Hits

  • The AI Act’s risk-based approach sorts AI applications into four levels, each subject to different restrictions and requirements: “unacceptable risk” applications, which are banned outright; “high risk”; “limited risk”; and “minimal risk.”
  • The AI Act treats the use of AI in the workplace as potentially high-risk.
  • The AI Act is expected to be published soon and to go into effect in late spring or early summer of 2024.

While the AI Act does not exclusively regulate employers, it treats the use of AI in the workplace as potentially high-risk, and specifically requires employers to:

  • notify employees and workers’ representatives before implementing “high-risk AI systems,” such as systems that are used for recruiting or other employment-related decision-making purposes;
  • follow “instructions of use” provided by the producers of high-risk AI systems;
  • implement “human oversight” by individuals “who have the necessary competence, training and authority, as well as the necessary support”; and
  • retain records of the AI output, and maintain compliance with other data privacy obligations.

A Risk-Based Approach

  1. Unacceptable Risk applications are banned. They include:
  • the scraping of faces from the internet or security footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • cognitive behavioral manipulation;
  • biometric categorization to infer sensitive data, such as sexual orientation or religious beliefs; and
  • certain cases of predictive policing for individuals.

  2. High Risk applications, including the use of AI in employment applications and other aspects of the workplace, are subject to a variety of requirements.

  3. Limited Risk applications, such as chatbots, must adhere to transparency obligations.

  4. Minimal Risk applications, such as games and spam filters, can be developed and used without restriction.

Hefty Penalties for Violations

Using prohibited AI practices can result in hefty penalties, with fines of up to €35 million or 7 percent of worldwide annual turnover for the preceding financial year, whichever is higher. Similarly, failure to comply with the AI Act’s data governance and transparency requirements can lead to fines of up to €15 million or 3 percent of worldwide turnover for the preceding financial year. Violations of the AI Act’s other requirements can result in fines of up to €7.5 million or 1 percent of worldwide turnover for the preceding financial year.
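
For readers who want to see the “whichever is higher” cap worked through, the sketch below expresses it as a simple maximum of the fixed amount and the turnover-based amount. The function name and the example turnover figure are hypothetical illustrations, not text drawn from the Act itself.

```python
# Illustrative sketch only: the fine cap for using a prohibited AI practice is
# EUR 35 million or 7 percent of worldwide annual turnover for the preceding
# financial year, whichever is higher.

def prohibited_practice_fine_cap(worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a prohibited AI practice (hypothetical helper)."""
    fixed_cap_eur = 35_000_000                               # EUR 35 million
    turnover_cap_eur = 0.07 * worldwide_annual_turnover_eur  # 7 percent of turnover
    return max(fixed_cap_eur, turnover_cap_eur)


if __name__ == "__main__":
    # Hypothetical company with EUR 2 billion in worldwide annual turnover:
    # 7 percent of turnover (EUR 140 million) exceeds the EUR 35 million floor.
    print(f"Fine cap: EUR {prohibited_practice_fine_cap(2_000_000_000):,.0f}")
```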

The AI Act is expected to be published and go into effect in late spring or early summer of 2024. In the meantime, employers can expect other countries to quickly follow suit with legislation modeled on the AI Act.

Ogletree Deakins’ Cross-Border Practice Group, Cybersecurity and Privacy Practice Group, and Technology Practice Group will continue to monitor developments and will provide updates on the Cross-Border, Cybersecurity and Privacy, and Technology blogs as additional information becomes available.
