The full plenary of the European Parliament has voted to approve the EU AI Act by an overwhelming majority:
- 523 MEPs voted in favour;
- 46 MEPs voted against; and
- 49 MEPs abstained.
This follows the political agreement reached between the member states and MEPs in December 2023. The Act aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field.
The regulation establishes obligations for AI based on its potential risks and level of impact. AI applications that pose a “clear risk to fundamental rights” will be banned, including certain applications involving the processing of biometric data. AI systems considered “high-risk”, such as those used in critical infrastructure, education, healthcare, law enforcement, border management or elections, will have to comply with strict requirements. Low-risk services, such as spam filters, will be subject to the lightest regulation. The Act also contains provisions addressing the risks posed by the systems underpinning generative AI tools and chatbots such as OpenAI’s ChatGPT.
So, what happens next?
The regulation is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure). The law also needs to be formally endorsed by the Council. There may still be minor changes to the text.
It will enter into force 20 days after its publication in the Official Journal, and will be fully applicable 24 months after entry into force, except for:
- bans on prohibited practices (six months after entry into force);
- codes of practice (nine months after entry into force);
- general-purpose AI rules, including governance (12 months after entry into force); and
- obligations for high-risk systems (36 months after entry into force).