The AI Act came into force on 1 August 2024. It aims to ensure that AI developed and used in the EU is trustworthy, with safeguards to protect people’s fundamental rights. It also aims to establish a harmonised internal market for AI in the EU, encouraging uptake of the technology and creating a supportive environment for innovation and investment.
The AI Act introduces a risk-based, product-safety approach, sorting AI systems in the EU into four categories:
- Minimal risk: Most AI systems, such as AI-enabled recommender systems and spam filters, fall into this category. These systems attract no obligations under the AI Act as they are considered to pose minimal risk to people’s rights and safety. Organisations can voluntarily adopt additional codes of conduct.
- Specific transparency risk: AI systems like chatbots must clearly disclose to users that they are interacting with a machine. Certain AI-generated content, including deepfakes, must be labelled as such, and users must be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers must design their systems so that synthetic audio, video, image and text content is marked in a machine-readable format and detectable as artificially generated or manipulated.
- High risk: AI systems identified as high-risk must comply with strict requirements, including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity. Regulatory sandboxes aim to facilitate responsible innovation and the development of compliant AI systems. High-risk AI systems include, for example, systems used for recruitment, for assessing whether somebody is entitled to a loan, or for running autonomous robots.
- Unacceptable risk: AI systems considered a clear threat to people’s fundamental rights are banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will, such as toys using voice assistance that encourage dangerous behaviour in minors, systems that allow “social scoring” by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited, for example emotion recognition systems used in the workplace, some systems for categorising people, and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).
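The machine-readable marking requirement in the transparency category can be illustrated with a minimal sketch. The Act does not prescribe any particular format, so the JSON label below is purely hypothetical; real systems use techniques such as watermarking or content-provenance metadata.

```python
import json

def mark_as_ai_generated(content: str, generator: str) -> str:
    """Wrap content in a hypothetical machine-readable provenance label.

    Illustrative only: the AI Act requires synthetic content to be
    detectably marked, but does not mandate this (or any specific) format.
    """
    label = {
        "ai_generated": True,   # disclosure flag for downstream detection
        "generator": generator, # which system produced the content
        "content": content,
    }
    return json.dumps(label)

# A downstream consumer can then check the flag before displaying the content.
marked = mark_as_ai_generated("Hello!", "example-model")
assert json.loads(marked)["ai_generated"] is True
```

In practice, a label like this would travel with the content as embedded metadata rather than wrapping it, so that the disclosure survives copying and redistribution.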
To complement this system, the AI Act also introduces rules for so-called general-purpose AI models: highly capable models designed to perform a wide variety of tasks, such as generating human-like text. General-purpose AI models are increasingly used as components of AI applications. The AI Act aims to ensure transparency along the value chain and addresses possible systemic risks of the most capable models. The European Commission has issued a consultation on general-purpose AI models, which ends on 10 September.
Application and enforcement of the AI Act
Member states have until 2 August 2025 to designate national competent authorities, which will oversee the application of the rules for AI systems and carry out market surveillance activities. The Commission’s AI Office will be the key implementation body for the AI Act at EU level, as well as the enforcer of the rules for general-purpose AI models. The European Artificial Intelligence Board is charged with ensuring uniform application of the AI Act across EU member states and will act as the main body for cooperation between the Commission and the member states.
Organisations that do not comply with the rules will face fines: up to 7% of global annual turnover for violations of banned AI applications, up to 3% for violations of other obligations, and up to 1.5% for supplying incorrect information.
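As a back-of-the-envelope illustration of these tiers (the percentages come from the text above; the Act also sets fixed euro amounts and actual fines depend on the circumstances of each case, both of which this sketch omits):

```python
# Maximum fine caps as a share of global annual turnover, per the tiers above.
# Illustrative only: real penalties are set case by case by the authorities.
FINE_CAPS = {
    "banned_ai_practice": 0.07,      # up to 7% of global annual turnover
    "other_obligation": 0.03,        # up to 3%
    "incorrect_information": 0.015,  # up to 1.5%
}

def max_fine(violation: str, global_annual_turnover_eur: float) -> float:
    """Return the turnover-based cap for a given violation category."""
    return FINE_CAPS[violation] * global_annual_turnover_eur

# A company with EUR 2 billion in turnover violating a ban could face a fine
# of up to roughly EUR 140 million under the turnover-based cap.
print(max_fine("banned_ai_practice", 2_000_000_000))
```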
Next steps
Most provisions of the AI Act will start applying on 2 August 2026. However, bans on AI systems deemed to present an unacceptable risk will already apply in six months’ time, and the rules for general-purpose AI models will apply from August 2025.
To bridge the transitional period before full implementation, the Commission has launched the AI Pact, which encourages AI developers to voluntarily adopt key obligations of the AI Act before they become legally binding.