EU AI Act – what does it mean?

In March 2024 the EU Parliament adopted the AI Act, three years after the EU Commission first proposed it. The final hurdle before it becomes EU law is for the Council of Ministers to adopt it, and that is expected soon.

It has engendered much debate, and reaching the final text has not been straightforward. The EU Commission’s hope is that, as with its data protection law – GDPR – this new law will become the global standard for artificial intelligence systems. As AI providers will need to adapt their systems to follow EU law, the EU-compliant version may end up being used globally: that is simpler than maintaining one version for the EU and a different version elsewhere.

Let’s take a look at the EU AI Act (text here – PDF) and assess how it will affect post-Brexit Britain.

TL;DR: EU AI Act

  • Defines AI System
  • Adopts a risk-based approach, prohibiting some systems and introducing safeguards for others
  • EU AI Office to have oversight
  • Fines for non-compliance ranging from €7.5 million to €35 million or 1–7% of turnover, whichever is higher
  • Staggered rollout timetable
  • Existing AI systems must become compliant within 2-4 years
  • AI systems that operate in the UK will likely be adapted to comply with the EU AI Act

Controversial definition

The Act introduces a definition of “artificial intelligence system”. Defining a rapidly-changing technology such as AI was never going to be easy, and the definition has attracted criticism. Some say it is too broad and are concerned it might capture even simple software applications such as spreadsheets with basic calculations. Others say it is not broad enough, or that it could quickly become outdated and therefore useless. The accusation that law fails to keep up with technology is not new, but you have to start somewhere.

The final version of the definition now closely mirrors the OECD definition here (PDF). The EU AI Act defines an AI system as:

“a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Prohibited AI practices

The EU AI Act adopts a risk-based approach: it prohibits some practices, designates others as high-risk, and imposes lighter obligations on lower-risk activities. Prohibited practices include:

  • deploying subliminal techniques or exploiting vulnerabilities due to a person’s age, physical or mental disability to distort behaviour
  • social scoring by public authorities which could lead to detrimental or unfavourable treatment
  • using real-time remote biometric ID systems in publicly accessible spaces for law enforcement, except in specific situations. The use of automatic facial recognition systems has already caused controversy under GDPR.

High-risk AI systems

The Act also lists high-risk systems and introduces new compliance measures. These systems include:

  • AI systems used in products covered by the EU’s product safety legislation including aviation, cars, medical devices and lifts.
  • Biometric identification and categorisation of people, and emotion recognition
  • Management and operation of critical infrastructure such as road traffic and the supply of water, gas, heating and electricity.
  • Determining access or assigning people to educational and vocational training institutions and assessing tests
  • Evaluating and recruiting people and making decisions on their employment such as promotions or termination of employment or monitoring and evaluating performance and behaviour
  • Access to and enjoyment of essential private services and public services and benefits including assigning emergency call outs or pricing insurance
  • Use of AI systems by law enforcement including assessing the risks of someone offending or use as polygraphs or to detect deep fakes
  • Migration, asylum and border control management
  • Administration of justice and democratic processes

Requirements for high-risk systems

Providers of high-risk AI must jump through some hoops. These include:

  • Establishing a risk management system
  • Conducting data governance to validate the data used
  • Keeping records
  • Drawing up technical documentation
  • Allowing a degree of human oversight
  • Ensuring the system is resilient against manipulation
  • Providing instructions to allow users to remain compliant

Providers who believe their AI system is not high-risk must document this assessment before marketing and selling it.

General purpose AI

The Act also addresses general purpose AI. General purpose AI means an AI model:

“that, when trained with a large amount of data using self-supervision at scale, displays significant generality and is capable of competently performing a wide range of distinct tasks”.

All providers of general purpose AI models must draw up technical documentation, provide information to those who intend to integrate the model into their own AI system, respect copyright and publish a sufficiently detailed summary of the content used for training the general purpose AI model. There are extra provisions for larger general purpose AI models where the cumulative amount of compute used for training is greater than 10²⁵ floating-point operations.
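The compute threshold above is a single cumulative figure, so checking it is simple arithmetic. A minimal sketch (the function name and the example FLOP figures are hypothetical; only the 10²⁵ threshold comes from the Act):

```python
# Cumulative training-compute threshold above which a general purpose
# AI model attracts the Act's extra provisions for larger models.
THRESHOLD_FLOP = 10**25

def is_large_gpai(training_flop: float) -> bool:
    """True if cumulative training compute exceeds the Act's threshold."""
    return training_flop > THRESHOLD_FLOP

# Hypothetical models:
print(is_large_gpai(3e25))  # True  – above the threshold
print(is_large_gpai(5e24))  # False – below the threshold
```

Note that the threshold counts total floating-point operations used in training, not operations per second.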

Enforcement & penalties

The Act introduces the right to lodge a complaint and a right to an explanation of individual decision-making.

Failure to comply with the Act can result in a fine although not within the first year after entry into force of the Act.

Supplying incorrect, incomplete or misleading information in response to a request can lead to a fine of the higher of €7.5 million or 1% of turnover.

The fine for providers of general purpose AI models is the higher of €15 million or 3% of turnover. This covers infringements of their obligations and non-compliance with enforcement measures, e.g. requests for information.

Engaging in a prohibited AI practice could lead to a fine of the higher of €35 million or 7% of annual turnover. This is higher than a fine for breach of GDPR and is more akin to a competition-law fine.
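Each of the three tiers above is “the higher of” a fixed sum and a percentage of turnover, so the applicable cap can be sketched in one line (the turnover figure is hypothetical):

```python
def fine_cap(fixed_eur: float, pct: float, annual_turnover_eur: float) -> float:
    """Maximum fine: the higher of a fixed sum or a share of annual turnover."""
    return max(fixed_eur, pct * annual_turnover_eur)

# Hypothetical provider with €1 billion annual turnover:
turnover = 1_000_000_000
print(fine_cap(35_000_000, 0.07, turnover))  # prohibited practice tier → 70,000,000
print(fine_cap(15_000_000, 0.03, turnover))  # general purpose AI tier  → 30,000,000
print(fine_cap(7_500_000, 0.01, turnover))   # misleading info tier     → 10,000,000
```

For this hypothetical provider the percentage exceeds the fixed sum in every tier; for a smaller company the fixed sum would bite instead.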

What next?

The agreed text is expected to be finally adopted in April 2024. After entry into force, the AI Act will apply as follows:

  • 6 months for prohibited AI systems
  • 12 months for general purpose AI
  • 24 months for general high-risk AI systems and transparency requirements
  • 36 months for high-risk AI systems covered by product safety legislation

Codes of practice must be ready 9 months after entry into force.

If you’re a provider of one of the AI systems above inside the EU, you need to ensure they are compliant within the relevant period above. If you are using such a system, you should ensure your provider will become compliant. Providers of existing AI systems will have a grace period to make them compliant: 4 years for high-risk systems and 2 years for general purpose AI.

Providers and users in the UK will not be directly impacted by this. Having said that, it is likely that AI systems that are used in the UK and the EU will be adapted to meet EU standards to reduce the likelihood of non-compliance. The UK will catch up at some point.

If you need advice, contact me f.jennings@teacherstern.com or +44 (0) 20 7611 2338.

