Tom Whittaker and Ryan Jenkins look at the increasing importance of AI literacy in the EU and globally
AI literacy, in simple terms, is about developing the skills and understanding needed to develop and use AI, and to appreciate its opportunities and risks. Contracts, governance frameworks and internal policies may require or encourage some form of AI literacy. However, some AI-specific regulation goes further and looks to mandate a form of AI literacy. In this article, we summarise the AI literacy requirements in proposed and enacted AI regulation, including the EU AI Act.
European Union
The EU AI Act came into force on 1 August 2024 and includes specific AI literacy requirements under Article 4:
‘Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.’
Who do the AI literacy obligations apply to?
Providers and deployers of an AI system are responsible for taking measures relating to AI literacy.
- A provider is a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
- A deployer is a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.

Note that the AI literacy rules are not limited to a specific type of AI system. They do not refer to, for example, high-risk AI systems only, and they are contained in an Article distinct from the obligations on providers of high-risk AI systems.
The AI Act, including the AI literacy obligations, applies to providers outside the EU too.
Who needs to have sufficient AI literacy?
The EU AI Act requires that AI literacy is in place for providers’ and deployers’ ‘staff and other persons dealing with the operation and use of AI systems on their behalf’.
It is unlikely that there is a one-size-fits-all approach to AI literacy. ‘Staff and other persons dealing with the operation and use of AI systems on their behalf’ may be numerous and work in various functions, each with different ‘technical knowledge, experience, education and training’. Consequently, providers and deployers will need to develop an AI literacy programme, or programmes, tailored to each group falling within that definition.
Parties will want to consider whether their contracts need to set out who is and is not responsible for AI literacy, what is meant by AI literacy, and who should have sufficient AI literacy and how. For example, deployers may find that, to provide AI literacy to their staff who use an AI system, they need information from the provider. Further, the individuals who need to have AI literacy under Article 4 may be internal or external employees of the provider or deployer, so providers and deployers may need to consider whether contracts with any external employees (or the external employees’ employers) include relevant clauses on AI literacy.
What are the AI literacy rules?
Article 4 does not define AI literacy. The recitals to the Act say that AI literacy should:
- Provide “all relevant actors in the AI value chain” with the insights required to ensure appropriate compliance with the Act and its correct enforcement.
- Equip providers, deployers and affected persons with the necessary notions to make informed decisions regarding AI systems.
However, it is unclear to what extent the recitals relate specifically to the AI literacy obligations under Article 4, or to the EU’s intention to improve AI literacy more broadly. For example, the recitals state that EU Member States are also encouraged to draw up voluntary codes of conduct to advance AI literacy at national level, which is broader than the specific obligations under Article 4. Further, Article 4 places obligations on providers and deployers, but the recitals also refer to informing and equipping others, namely affected persons. Whilst the end of Article 4 refers to affected persons, it does so as a factor in what the AI literacy measures should involve (‘taking into account … considering the persons or groups of persons on whom the AI systems are to be used’) rather than as identifying who should be provided with AI literacy under Article 4.
The recitals also note that such skills, knowledge and understanding can vary with the relevant context and can include, for example:
- understanding the correct application of technical elements during the AI system’s development phase;
- the measures to be applied during its use;
- the suitable ways in which to interpret the AI system’s output; and
- in the case of affected persons, the knowledge necessary to understand how decisions taken with the assistance of AI will have an impact on them.
Article 4 also says that AI literacy measures must take into account:
- their workforce’s technical knowledge, experience, education and training;
- the context in which the AI systems are to be used; and
- the individuals or groups on whom the AI systems are to be used.
Expect the European Commission and Member States to publish voluntary codes of conduct which may assist.
From when do the AI literacy rules apply?
The AI Act entered into force on 1 August 2024, and the AI literacy provisions apply from 2 February 2025 (Article 113).
What are the consequences for non-compliance?
The AI Act does not include specific provisions for non-compliance with the Article 4 AI literacy obligations. However, if any provider or deployer supplies incorrect, incomplete or misleading information to notified bodies or national competent authorities in reply to a request, it ‘shall’ be subject to a fine of up to €7.5m or up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher (with different figures for small and medium-sized enterprises). Further, when deciding whether to impose a fine and the amount of any fine, ‘all relevant circumstances of the specific situation shall be taken into account’, which could include any steps taken (or not) regarding AI literacy.
Also, be aware that there are discussions about the future of the AI Liability Directive (AILD), draft EU legislation that seeks to answer the question: ‘If an AI system causes someone harm, intentionally or by a negligent act or omission, will they be able to claim compensation for damages?’ (read more on this here). The Commission Services have provided the Council of the European Union with considerations for potential substantive amendments to the AILD, including whether the Article 4 AI literacy obligations are the type that, if breached, would give rise to a rebuttable presumption of a causal link between the breach and any loss caused by an AI system.
What is happening internationally?
There is little mention of AI literacy in global reviews such as the Stanford University AI Index (here). What can be seen appears to relate more to a general improvement in the understanding of AI, rather than to an equivalent of Article 4 of the AI Act.
In the UK, the government’s approach to AI regulation does not include AI literacy specifically. There is reference to it in the government response to its consultation on AI regulation – for example, how consultation respondents emphasised AI literacy’s importance, or how in other regulatory domains there are literacy requirements, such as with Ofcom’s media literacy duty under the Online Safety Act – but no indication that it will be required under government regulation.
Whilst there are other proposals for AI regulation in the UK, specifically the AI Regulation Bill and the Trades Union Congress’ Artificial Intelligence (Regulation and Employment Rights) Bill, neither contains specific AI literacy obligations for providers, deployers or their staff. Improvements to AI literacy may, though, result indirectly – for example, through the creation of AI responsible officers under clause 4 of the AI Regulation Bill, should it be enacted.
Instead, AI literacy is likely to be on the agenda in the UK in other ways. In January 2024, the UK government issued the first version of its Generative AI framework for government (read more here). The framework sets out “ten common principles to guide the safe, responsible and effective use of generative AI in government organisations”. Some of the key principles that link to AI literacy are:
- Principle 1: You know what generative AI is and what its limitations are
- Principle 5: You understand how to manage the full generative AI lifecycle
- Principle 9: You have the skills and expertise that you need to build and use generative AI
AI literacy also appears within the UK government’s AI strategy, under sections such as skills for jobs, and may appear within the UK government’s AI Action Plan.
By way of other examples, in the US various draft bills have been introduced to improve access to AI literacy training, including:
- the AI Leadership Training Act, a bill which requires ‘the Office of Personnel Management (OPM) to develop and implement an annual training program on artificial intelligence (AI) for federal managers, supervisors, and other employees designated to participate in the program.’
- the Artificial Intelligence Literacy Act of 2023, amending the Digital Equity Act of 2021, which ‘requires the National Telecommunications and Information Administration to establish grant programs for promoting digital equity, supporting digital inclusion activities, and building capacity for state-led efforts to increase adoption of broadband by their residents’ – to include AI literacy, meaning ‘the skills associated with the ability to comprehend the basic principles, concepts, and applications of artificial intelligence, as well as the implications, limitations, and ethical considerations associated with the use of artificial intelligence’.
These are still at the early legislative stages.
AI literacy on the agenda
AI literacy is likely to be on the agenda for many organisations, whether they are developing AI models or systems or looking to use and benefit from AI. In part this is because current and anticipated AI regulations create obligations directly or indirectly relevant to AI literacy. It is also because AI literacy is becoming an important part of developing and using AI systems responsibly and in a trustworthy way.
Tom Whittaker is a Director and solicitor advocate in Burges Salmon’s Dispute Resolution team.
Ryan Jenkins is a Solicitor in Burges Salmon’s Dispute Resolution team.
This article was first published on the Burges Salmon website and is reproduced with their permission.