The European Parliament Internal Market Committee and the Civil Liberties Committee have adopted a draft negotiating mandate on the AI Act. MEPs say that they aim to ensure that AI systems are overseen by people, and are safe, transparent, traceable, non-discriminatory, and environmentally friendly. They also want a uniform, technology-neutral definition of AI, so that it can apply to both current and future AI systems.
Risk-based approach to AI – prohibited AI practices
The rules follow a risk-based approach and establish obligations for providers and users depending on the level of risk the AI can generate. AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status, personal characteristics).
MEPs have amended the European Commission’s draft to include bans on intrusive and discriminatory uses of AI systems such as:
- “Real-time” remote biometric identification systems in publicly accessible spaces;
- “Post” remote biometric identification systems (there is an exception for law enforcement for the prosecution of serious crimes after judicial authorisation);
- Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- Predictive policing systems (based on profiling, location or past criminal behaviour);
- Emotion recognition systems in law enforcement, border management, workplaces, and educational institutions; and
- Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.
High-risk AI
The Committees have expanded the definition of high-risk areas to include harm to people’s health, safety, fundamental rights or the environment. The new high-risk list also includes AI systems used to influence voters in political campaigns and the recommender systems used by social media platforms designated as very large online platforms under the Digital Services Act (those with more than 45 million users).
General-purpose AI – transparency measures
The new draft also includes obligations for providers of foundation models. These providers would have to guarantee robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law. They would need to assess and mitigate risks, comply with design, information and environmental requirements, and register in the EU database.
Generative foundation models, such as GPT, would have to comply with additional transparency requirements, including disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training.
Supporting innovation and protecting individuals’ rights
The Committees aim to boost AI innovation and have therefore added exemptions to the rules for research activities and AI components provided under open-source licences. The new law also promotes regulatory sandboxes established by public authorities to test AI systems before their deployment.
MEPs also want to strengthen individuals’ right to file complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that significantly affect their rights. The draft also reforms the role of the EU AI Office, which would be tasked with monitoring how the AI rules are implemented.
Next steps
Before negotiations with the Council on the final form of the law can begin, the draft negotiating mandate needs to be endorsed by the whole Parliament, with the vote expected during the 12-15 June session.