The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) have adopted a joint opinion on the European Commission’s proposed AI regulation.
The EDPB and the EDPS strongly welcome the aim of addressing the use of AI systems within the EU, including the use of AI systems by EU institutions, bodies or agencies. At the same time, the EDPB and EDPS are concerned by the exclusion of international law enforcement cooperation from the scope of the proposed regulation.
The EDPB and EDPS also stress the need to explicitly clarify that existing EU data protection legislation applies to any processing of personal data coming within the scope of the draft AI Regulation.
Although the EDPB and the EDPS welcome the risk-based approach in the AI Regulation, they consider that the concept of “risk to fundamental rights” should be aligned with the EU data protection framework. The EDPB and the EDPS recommend that societal risks for groups of individuals should also be assessed and mitigated. In addition, they stress that the classification of an AI system as high-risk under the AI Regulation does not necessarily mean that it is lawful per se or that it can be deployed by the user as such. The EDPB and the EDPS also consider that compliance with legal obligations set out in EU laws, including on personal data protection, should be a precondition for placing a CE-marked product on the European market.
The opinion’s key message concerns the identification of individuals in public spaces. Citing what they describe as the extremely high risks posed by remote biometric identification of individuals in publicly accessible spaces, the EDPB and the EDPS call for a general ban on any use of AI for the automated recognition of human features in public spaces, such as recognition of faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals, in any context. They also recommend a ban on AI systems that use biometrics to categorise individuals into groups based on ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights. Furthermore, the EDPB and the EDPS consider that the use of AI to infer the emotions of a natural person is highly undesirable and should be prohibited, except in very specific cases, such as certain health purposes where recognising a patient’s emotions is important. Finally, the use of AI for any type of social scoring should also be prohibited.
In addition, the EDPB and the EDPS welcome the fact that the proposed regulation designates the EDPS as the competent authority and the market surveillance authority to supervise EU institutions, agencies and bodies. However, they say that the role and tasks of the EDPS should be further clarified, especially in relation to its market surveillance role.
The EDPB and EDPS note that data protection authorities are already enforcing EU laws in relation to AI systems involving personal data. Designating them as national supervisory authorities would therefore ensure a more harmonised regulatory approach and contribute to a consistent interpretation of data processing provisions across the EU. Consequently, the EDPB and the EDPS consider that, to ensure a smooth application of the AI Regulation, data protection authorities should be designated as national supervisory authorities under Article 59 of the AI Regulation.
Finally, the EDPB and EDPS question the predominant role assigned to the European Commission in the proposed European Artificial Intelligence Board (EAIB), as this would conflict with the need for a European AI body that is independent from any political influence. To ensure its independence, the AI Regulation should give more autonomy to the EAIB and ensure it can act on its own initiative.