European Commission publishes draft guidelines on AI system definition under EU AI Act

February 12, 2025

The European Commission has published guidelines on the AI system definition, which explain the practical application of the definition set out in Article 3(1) of the AI Act:

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

The Commission's guidance aims to help organisations determine whether a software system constitutes an AI system, in order to facilitate the effective application of the rules.

The guidelines on the AI system definition are not binding. They are designed to evolve over time and will be updated in light of practical experiences, new questions and use cases that arise.

The Commission has published these guidelines alongside its Guidelines on prohibited AI practices, as defined by the AI Act.

The AI Act, which aims to promote innovation while ensuring high levels of protection for health, safety, and fundamental rights, classifies AI systems into different risk categories, including prohibited systems, high-risk systems, and those subject to transparency obligations. Since 2 February 2025, the first rules under the AI Act have applied. These include the AI system definition, AI literacy obligations, and the prohibitions on AI practices that pose unacceptable risks in the EU.

The guidelines remain in draft form, and the Commission has not yet indicated when they will be finalised.