The EU AI Act

A Step-by-Step Guide to the EU AI Act & How to Avoid Fines

Understanding the EU AI Act

Does your business use chatbots or an AI assistant? Is AI involved in your customer service or your company’s social media? Are you a tech start-up company, healthcare provider, or education provider? AI is now being used in transportation, retail, cybersecurity, decision-making, finance, learning…the list goes on.

Unless you’ve been living under a rock, or are completely cut off from society, you will be aware of the growing use of AI throughout the world.

It seems that AI, or Artificial Intelligence, is the ‘next big thing’, set to play an important role in our lives, with multiple applications across a vast array of industries.

In fact, it is already being widely used, although the rules governing its use are still in their infancy.

You quite naturally may have some questions, such as:

  • What is the EU AI Act definition of AI?
  • When will the EU AI Act come into force?
  • Does the EU AI bill apply to the UK?
  • How can I ensure I am compliant regarding my use of AI?
  • What EU AI Act training is available?

We will cover these questions, and more, to supply you with the vital information you need on this interesting subject.

Does the EU AI Act Apply to the UK?

In short – yes – it applies extraterritorially and covers any UK business with operations within the EU. If your UK business interacts with the EU market – for example, you have customers in the EU, or you develop or deploy AI systems within the EU – then you must comply with the Act.

The Act is not sector-specific: it applies to all sectors and to a wide range of AI systems, and strict penalties apply for non-compliance. Certain exemptions apply, such as purely personal, non-professional use.

Most of its provisions come into effect on 2nd August 2026.

What Is the EU AI Act Definition of AI?

The Act defines artificial intelligence in the following terms (brief summary):

  • “AI system”

A machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment, and which infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions.

  • “General-purpose AI model”

An AI model that displays significant generality, is trained on a large amount of data using self-supervision at scale, and is capable of performing a wide range of tasks, regardless of how the model is placed on the market.

  • “General-purpose AI system”

An AI system based on a general-purpose AI model, capable of serving a variety of purposes, both for direct use and for integration into other AI systems.

In plain terms, an AI system processes data with a degree of autonomy and uses what it infers to generate content, recommendations, decisions, predictions, and so on.

An EU AI Act Overview

The EU AI Act provides a framework intended to legally regulate the use of artificial intelligence (AI) within the European Union (EU). This includes the development and marketing of AI systems within the EU.

The primary concern is that AI is developed and used in ways that take into account any risks to people’s safety and fundamental rights.

This is the first attempt by a governing body to create comprehensive regulations around the usage of artificial intelligence, in order to ensure that AI is used in safe and trustworthy ways.

Some key points:

  • Risk assessment

A risk-based approach is taken, with various levels of risk assessed, and different rules applied, based on the risks involved. Some AI systems will have stricter rules if they are considered to be ‘high risk’.

  • Unacceptable risk

Some applications of AI will be completely prohibited if they are considered to have an unacceptable level of risk. Certain usage of AI will be banned, for example, if it is deemed manipulative or dangerous to people in some way, or if it infringes on a person’s privacy or rights.

  • Transparency

Users must be made aware when they are using or interacting with an AI system, particularly when important decisions are being made. 

  • Compliance

Risk assessment and safety measures must be undertaken to ensure that regulations are being followed, especially in cases of usage considered to be high risk.

  • Compliance enforcement

There will be considerable fines, as well as other penalties, for cases of non-compliance. Data-governance requirements must be adhered to, particularly in applications that are ‘high-risk’. Member states will have the responsibility of ensuring the AI Act is followed.  

The main aim of the EU Artificial Intelligence Act is to allow the responsible development of AI systems, and at the same time protect citizens from any potential harmful effects of artificial intelligence.

By setting clear rules around the usage and development of AI systems, innovation can be cultivated while the fundamental rights of people are protected.

EU AI Act Risk Categories

The European Artificial Intelligence Act defines four general categories of risk, each with different requirements:

  • Unacceptable risk

Certain AI systems are prohibited if they are regarded as presenting an unacceptably high level of risk. Some examples include AI systems that are exploitative or utilise manipulative methods to negatively affect a person’s behaviour and cause them, or others, harm. 

Other EU AI Act prohibited practices include: social scoring based on personal data; systems which promote, or could lead to, discrimination; and facial recognition software used to infer an individual’s political views, sexual orientation, or other sensitive, private information.

Autonomous AI-powered weapons that could target individuals without human oversight would also fall into this category. In short, any AI system which could endanger individuals or violate their fundamental rights would be classified as ‘unacceptable risk’.

  • High risk

The most detailed compliance rules cover AI systems that are considered ‘high risk’. These AI systems could potentially have adverse consequences, or be harmful to individuals, if used incorrectly.

This applies especially to AI systems used to make medical assessments or hiring decisions; AI used in the administration of justice, law enforcement, or surveillance (for example, facial recognition software or biometric identification systems used in public spaces); AI used in credit scoring or in assessing insurance premiums; and AI systems which control self-driving vehicles.

Because of the inherent high risk of harm, discrimination, miscarriage of justice, or violation of an individual’s rights, providers of ‘high risk’ AI systems are required to carry out risk assessments, undergo conformity assessment and certification, and implement appropriate strategies and safeguards to mitigate risk.

  • Limited risk

AI systems considered to carry ‘limited risk’ include chatbots, and biometric or emotion recognition AI systems that interact with people. Other AI systems considered ‘limited risk’ are those which create ‘deep fakes’ such as online content that seems genuine although it is AI-generated. Such systems are required to indicate that the content has been created or manipulated artificially. 

It’s worth noting that some exceptions could include legally permitted usage of AI to prevent or detect criminal behaviour. 

Also, for creative or artistic works which are evidently fictional or satirical, the disclosure may be made in a way that does not impede the display or enjoyment of the work.

  • Low risk

AI systems not considered to be ‘unacceptable’, ‘high’, or ‘limited’ risk are classified as ‘low’ or ‘minimal’ risk.

As we can see, the EU AI Act risk assessment is intended to classify AI systems according to the potential level of harm they could cause. This includes manipulation of vulnerable individuals such as children, as well as discrimination or privacy concerns (a simple sketch of this tiered logic follows at the end of this section).

Furthermore, the European Commission has the power to classify certain general-purpose AI models as presenting systemic risk; a distinction is made between AI models which carry a systemic risk and those which don’t.

If the cumulative compute used to train a general-purpose AI model exceeds a certain threshold (the Act sets this at 10^25 floating-point operations), the model is presumed to present systemic risk.
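
To make the tiered approach above concrete, here is a minimal, illustrative sketch in Python. Everything in it is hypothetical – the flags, the helper, and the examples – because the real classification is a legal assessment against the Act’s definitions and annexes, not a boolean check; the sketch only shows how the tiers take precedence over one another.

    # Illustrative only: a hypothetical sketch of the Act's four-tier precedence.
    # A system falls into the most severe tier whose criteria it meets.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited outright"
        HIGH = "strict compliance obligations"
        LIMITED = "transparency obligations"
        MINIMAL = "no specific obligations"

    def classify(prohibited_practice: bool,
                 high_risk_use_case: bool,
                 interacts_or_generates_content: bool) -> RiskTier:
        """Hypothetical helper: checks the tiers in order of severity."""
        if prohibited_practice:                 # e.g. social scoring, manipulation
            return RiskTier.UNACCEPTABLE
        if high_risk_use_case:                  # e.g. hiring, credit scoring, medical
            return RiskTier.HIGH
        if interacts_or_generates_content:      # e.g. chatbots, 'deep fakes'
            return RiskTier.LIMITED
        return RiskTier.MINIMAL

    # A high-risk use case outranks a transparency-only trigger:
    print(classify(False, True, True))  # RiskTier.HIGH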

Compliance Roles

There are various compliance roles, each with their own particular obligations, according to the EU AI Act. 

  • Provider

A person, agency, or other body that develops an AI system or AI model (or has one developed) and places it on the EU market, or puts it into service, under its own name or trademark.

  • Importer

A person established or located in the EU who places an AI system on the market that carries the trademark or name of a person established in a third country.

  • Distributor

A person in the supply chain who makes an AI system available on the market, but is not classed as a ‘provider’ or ‘importer’. 

  • Deployer

A person or agency that uses an AI system (except where the usage is non-professional).

  • Operator

A provider, importer, distributor, deployer, or authorised representative of an AI system.

What Is the Territorial and Sectoral Scope of the EU AI Act?

The Act applies to all sectors and is applicable to providers and users of AI systems used in the EU, whether they are located or established in the EU or are in a third country.

The related AI Liability Directive (AILD) applies to non-contractual, fault-based civil law claims in the EU.

It is worth noting that the AILD does not directly govern the risks posed by AI systems. It was proposed by the European Commission in 2022 and complements the EU AI Act.

The purpose of the AI Liability Directive is to create regulations in order to hold people in the supply chain liable for any harm caused by AI systems, and make it easier for victims to prove harm caused to them. 

The directive introduces a “presumption of causality” and gives courts the power to order the disclosure of evidence concerning ‘high risk’ artificial intelligence systems.

EU AI Act Fines

Non-compliance penalties are considerable, underscoring the importance of adhering to the EU AI Act’s principles and of using AI systems safely and responsibly, in line with the Act.

Some examples of non-compliance include:

  • Providing or using AI systems which are banned by the Act as they are considered ‘unacceptable risk’.
  • Failing to register or mark products correctly.
  • Use of AI systems which could cause harm to individuals, or violate their privacy rights.

Failing to comply with the rules on prohibited / unacceptable-risk practices could see you incurring a hefty fine of up to €35,000,000 or 7% of your annual worldwide turnover, whichever is higher.

If you provide incorrect or misleading information to the authorities, you can receive a fine of up to €7.5 million or 1% of your annual worldwide turnover, again whichever is higher.
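
To illustrate how a ‘whichever is higher’ cap works in practice, here is a short, hypothetical Python sketch – the turnover figure is invented, and the actual fine in any real case is set by the regulator, not by this formula.

    # Illustrative sketch: the maximum fine cap is the higher of a fixed
    # amount and a percentage of annual worldwide turnover.
    def max_fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
        return max(fixed_cap_eur, turnover_eur * pct)

    # Hypothetical company with EUR 800m annual worldwide turnover:
    print(max_fine_cap(800_000_000, 35_000_000, 0.07))  # ~EUR 56m: 7% exceeds the EUR 35m cap
    print(max_fine_cap(800_000_000, 7_500_000, 0.01))   # ~EUR 8m: 1% exceeds the EUR 7.5m cap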

Additionally, it’s important to note that if your business is investigated (and possibly penalised) under the EU AI Act, it is quite likely that you will also face a parallel investigation under the GDPR – leaving you subject to two separate investigations (and potential fines).

When Will the EU AI Act Come Into Force?

The Act came into force on 1st August 2024; however, the EU AI Act timeline includes a transition period of several years, with many provisions not fully applying until 2nd August 2026.

Some key dates in the EU AI Act implementation timeline include:

  • 21st April 2021

The AI Act is proposed by the European Commission.

  • December 2023

The European Parliament and the Council agree on the Act.

  • 12th July 2024

The Official Journal of the European Union publishes the AI Act as a formal notification of the new law.

  • 1st August 2024

The AI Act enters into force.

  • 2nd November 2024

Member state deadline for identifying and listing the governing bodies responsible for the protection of fundamental rights.

  • 2nd February 2025

Prohibitions on unacceptable-risk AI systems start to apply.

  • 2nd May 2025

Finalisation of codes of practice for GPAI (General-Purpose AI).

  • 2nd August 2025

Requirements, obligations, and penalties go into effect for providers of GPAI models. National authorities must be appointed to oversee the AI Act.

  • 2nd August 2026

Most of the AI Act provisions begin to apply.

Other notable dates are:

2nd August 2027

Third-party conformity obligations become applicable for EU AI Act high-risk use cases, including products such as toys, agricultural vehicles, and medical equipment where AI is used as a safety component.

  • 31st December 2030

Obligations for AI systems in the areas of freedom, security, and justice (for example, the Schengen Information System) come into effect: AI systems which are components of such large-scale IT systems and were placed on the market before 2nd August 2027 must be brought into compliance by this date.

EU AI Act Summary

As you can see, the implications of being non-compliant are serious, with violations of the regulations carrying significant penalties.

Here at Privacy Helper we have developed an in-house AI task force to help our clients to be ready for the requirements of the AI Act.

There are several things to consider, such as:

  • Is your business prepared?
  • Do you have in-house specialists or experts in AI?
  • What AI tools do you use / intend to use?
  • Do you have an appropriate risk-assessment procedure in place?
  • Have you considered anti-bias measures, security, penetration testing, etc.?

Whatever your concerns or questions, we are here to provide help and support.

Contact us and we will be in touch to assist you with your EU AI Act needs.