Article 4 of the Artificial Intelligence Act requires providers and deployers of AI systems to ensure a sufficient level of AI literacy among their staff and other persons dealing with AI systems on their behalf.
Providers and deployers of AI systems should take measures to ensure a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf. They should do so taking into account the technical knowledge, experience, education and training of the staff and other persons, as well as the context in which the AI systems are to be used and the persons on whom the AI systems are to be used.
The concept of AI literacy mentioned in Article 4 of the AI Act relies on the definition of the term given in Article 3(56) of the AI Act, according to which:
‘AI literacy’ means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.
Article 4 of the AI Act is a key provision to ensure that all providers and deployers of AI systems equip their staff with the right skills, knowledge and understanding of the system(s) provided or deployed. This concerns anyone in the organisation directly dealing with an AI system and reinforces the provisions on transparency (Article 13 of the AI Act) and human oversight (Article 14 of the AI Act) included in the Regulation. At the same time, Article 4 indirectly contributes to the protection of affected persons, because it ensures an effective application of the AI Act rules.
Article 4 of the AI Act does not entail an obligation to measure employees' knowledge of AI. Yet, it affirms that AI providers and deployers should ensure a sufficient level of AI literacy, taking into account the technical knowledge, experience, education and training of employees.
Yes. The AI Act distinguishes between AI models (including general-purpose AI (GPAI) models) and AI systems (including GPAI systems), and further between prohibited and high-risk AI systems. For more details, please also see the Guidelines on AI system definition, published by the Commission on 6 February to assist providers and other relevant persons in determining whether a software system constitutes an AI system and to facilitate the effective application of the rules. These guidelines, which are non-binding and designed to evolve over time, explain the practical application of the legal concept as anchored in the AI Act. The Guidelines on the AI system definition were published in addition to the Guidelines on prohibited artificial intelligence (AI) practices, as defined by the AI Act.
For the time being, information on the Commission's activities in relation to Article 4 of the AI Act can be found on the AI Pact webpage, including the recording of the webinar that took place on 20 February and the living repository on AI literacy. A dedicated webpage on AI literacy and skills is under preparation.
The AI Office will not impose strict requirements regarding Article 4 of the AI Act and its "sufficient level of AI literacy". On the contrary, it considers a certain degree of flexibility necessary, given the breadth of the topic of AI literacy and the fast-evolving nature of AI technology. Yet, as a minimum, to comply with Article 4 of the AI Act, providers and deployers of AI systems should:
a) Ensure a general understanding of AI within their organisation: What is AI? How does it work? What AI is used in our organisation? What are its opportunities and dangers?
b) Consider the role of their organisation (provider or deployer of AI systems): Is my organisation developing AI systems, or only using AI systems developed by another organisation?
c) Consider the risk of the AI systems provided or deployed: What do employees need to know when dealing with such AI systems? What risks do they need to be aware of, and do they need to be aware of mitigation measures?
d) Concretely build their AI literacy actions on the preceding analysis.
Considerations a, b, c and d include legal and ethical aspects. Therefore, connections to the EU AI Regulation (i.e. an understanding of the AI Act) and to principles of ethics and governance are encouraged.
As reported in the answer to the previous question, to comply with Article 4 of the AI Act, organisations should consider their role (as providers or deployers of AI systems) as well as the risks associated with the AI systems they provide and/or deploy. On this basis, organisations should adapt their AI literacy approach. For example, if the AI systems of the organisation are high-risk according to Chapter III of the AI Act, additional measures might be relevant to ensure employees are aware of how to deal with the given AI systems and to avoid and/or mitigate their risks.
This depends on the organisation's answers to the considerations in question 1. Yet, in many cases, simply relying on the AI systems' instructions for use, or asking the staff to read them, might be ineffective and insufficient. Article 4 of the AI Act is intended to ensure training and guidance appropriate to each target group's level and type of knowledge, as well as to the context and purpose of the AI systems used in the organisation. This is also in alignment with other provisions of the AI Act. For example, Article 26 introduces an obligation for deployers of high-risk systems to ensure that the staff dealing with the AI systems in practice are sufficiently trained to handle the system and ensure human oversight. Relying on the instructions for use is therefore not sufficient; further measures are necessary.
There is no one-size-fits-all approach when it comes to AI literacy, and the AI Office does not intend to impose strict requirements or mandatory trainings. The requirements for a training depend on the concrete context. While replicating the practices collected does not automatically grant a presumption of compliance with Article 4, the initiatives in the living repository on AI literacy could provide some inspiration.
No, the AI Office does not impose requirements for specific sectors. Yet, as reported in the answer to question 1, the context in which AI systems are provided/deployed, including the sector and the purpose, is relevant when developing an AI literacy initiative. Moreover, the level of risk of the AI systems should be considered.
This depends on the concrete type of AI system and its risk (e.g. for high-risk systems). In general, people working for a service provider or contractor need to have the appropriate AI skills to fulfil the task in question, the same as employees.
Yes, they should be informed about the specific risks, for example hallucination.
These are two distinct questions. Both the employees and the human-in-the-loop need the appropriate skills, targeted to the system they are using.
Normally yes, but it depends on the AI tool in question and their specific qualifications. This is particularly relevant in view of the speed of technological developments. The organisation should still consider the steps in the answer to question 1 and ask itself: Do these technical employees know what needs to be known about the AI systems of the organisation and how to deal with them? Are they aware of all the risks and how to avoid/mitigate them? Moreover, the organisation should consider what else these employees might need to know, e.g. the legal and ethical aspects of AI.
Yes, Article 4 of the AI Act encourages providers and deployers to consider the knowledge, experience, education and training of employees and other persons in providing a sufficient level of AI literacy. Given the differences between AI systems, and that the level of knowledge and experience, as well as the type of education and training received, might vary, having different levels of training or learning approaches can be appropriate.
There is no need for a certificate. Organisations can keep an internal record of trainings and/or other guiding initiatives.
No, no specific governance structure is mandated to comply with Article 4 of the AI Act.
For the moment, guidance will be provided through further examples of practices, webinars and clarifications via this Q&A.
The Commission will publish guidelines on the application of the requirements and obligations referred to in Articles 8 to 15 and Article 25 of the AI Act; these guidelines will also touch upon issues of literacy when discussing, for example, human oversight or risk management.
The AI@EC Communication has already identified, as an operational action, the development of a policy to build and maintain an AI-skilled workforce. The European Commission has already implemented several measures for its staff regarding AI literacy.
Currently, many of the agencies already have access to the Commission's learning platform (EU-Learn), as well as to resources such as the AI learning packages and other trainings of the Commission.
To support implementation of and compliance with Article 4 of the AI Act, the AI Office will continue nurturing the living repository of AI literacy practices, gathering further examples from organisations, and updating the Q&A at hand. Further awareness activities will be organised, and a dedicated webpage for activities related to AI literacy (within and beyond the remit of Article 4), skills and talent will be launched with the aim of promoting access to AI literacy and fostering dialogue on AI for all.
The AI Office highly values the insights and expertise of all stakeholders, including industry. For this reason, we have created the AI Pact to foster a collaborative community where stakeholders can share best practices and internal policies that may be of use to others in their compliance journey. With respect to AI literacy, within the AI Pact we have recently published a living repository of AI literacy practices; any provider and/or deployer of AI systems that has put in place an AI literacy programme is invited to submit a contribution.