The European Parliament and Council have reached a provisional agreement on the Artificial Intelligence Act. This hotly debated Regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI, while boosting innovation and making Europe a leader in the field. The rules establish obligations for AI based on its potential risks and level of impact.
The Regulation aligns the definition of AI with that of the OECD.
Banned applications
The draft Regulation prohibits:
- biometric categorisation systems that use sensitive characteristics (for example, political, religious, philosophical beliefs, sexual orientation, race);
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- emotion recognition in the workplace and educational institutions;
- social scoring based on social behaviour or personal characteristics;
- AI systems that manipulate human behaviour to circumvent people's free will; and
- AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).
Law enforcement exemptions
There will be a series of safeguards and narrow exceptions for the use of remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes. Their use will be subject to prior judicial authorisation and restricted to strictly defined lists of crimes. “Post-remote” RBI would be used strictly in the targeted search of a person convicted or suspected of having committed a serious crime.
“Real-time” RBI would comply with strict conditions and its use would be limited in time and locations, for:
- targeted searches for victims (abduction, trafficking, sexual exploitation),
- prevention of a specific and present terrorist threat, or
- the localisation or identification of a person suspected of having committed one of a specific list of crimes (for example, terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, or environmental crime).
Obligations for high-risk systems
For AI systems classified as high-risk (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law), clear obligations were agreed. Fundamental rights impact assessments will be required and will also apply to the insurance and banking sectors. AI systems used to influence the outcome of elections and voter behaviour are also classified as high-risk. Individuals will have a right to make complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.
Guardrails for general AI systems
To account for the wide range of tasks AI systems can accomplish and the rapid expansion of their capabilities, it was agreed that general-purpose AI (GPAI) systems, and the GPAI models they are based on, will have to adhere to certain transparency requirements. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries of the content used for training.
There are more stringent obligations for high-impact GPAI models with systemic risk. If these models meet certain criteria they will be required to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency. Until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the Regulation.
Measures to support innovation and SMEs
There was also agreement to promote regulatory sandboxes and real-world testing, established by national authorities, to develop and train innovative AI before it is placed on the market.
A new governance architecture
Following the new rules on GPAI models and the clear need for their enforcement at EU level, an AI Office will be set up within the Commission, tasked with overseeing these most advanced AI models, contributing to fostering standards and testing practices, and enforcing the common rules in all member states. A scientific panel of independent experts will advise the AI Office on GPAI models by contributing to the development of methodologies for evaluating the capabilities of foundation models, advising on the designation and emergence of high-impact foundation models, and monitoring possible material safety risks related to foundation models.
The AI Board, comprising member states’ representatives, will remain a coordination platform and an advisory body to the Commission and will give member states an important role in the implementation of the Regulation, including the design of codes of practice for foundation models. Finally, an advisory forum for stakeholders, such as industry representatives, SMEs, start-ups, civil society and academia, will be set up to provide technical expertise to the AI Board.
Sanctions and entry into force
Non-compliance with the rules can lead to fines ranging from €35 million or 7% of global turnover to €7.5 million or 1.5% of turnover, depending on the infringement and the size of the organisation.
The provisional agreement provides that the AI Act should apply two years after its entry into force, with some exceptions for specific provisions. It will not be published in the Official Journal for some weeks, as the text needs to be finalised and translated. It is expected to apply from 2026.
For more information, see the Council press release.