Article 4

Article 4 of the Artificial Intelligence Act requires providers and deployers of AI systems to ensure a sufficient level of AI literacy among their staff and other persons dealing with AI systems on their behalf.

 

The questions below are organised around three themes: definitions, compliance and enforcement.

 

 

 

  • What does Article 4 of the AI Act provide?

  Providers and deployers of AI systems should take measures to ensure a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf. They should do so taking into account the technical knowledge, experience, education and training of those staff and other persons, as well as the context in which the AI systems are to be used and the persons on whom the AI systems are to be used.

 

  • What is AI literacy under Article 4 of the AI Act?

 

The concept of AI literacy in Article 4 of the AI Act relies on the definition given in Article 3(56) of the AI Act, according to which:

 

‘AI literacy’ means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.

 

 

  • Which target group is in scope of Article 4 of the AI Act? Who are ‘other persons’?

 

Article 4 of the AI Act is a key provision to ensure that all providers and deployers of AI systems equip their staff with the right skills, knowledge and understanding of the system(s) provided or deployed. This concerns anyone in the organisation directly dealing with an AI system, and it reinforces the provisions on transparency (Article 13 of the AI Act) and human oversight (Article 14 of the AI Act) included in the Regulation. At the same time, Article 4 indirectly contributes to the protection of affected persons, because it ensures the effective application of the AI Act's rules.


“Persons dealing with the operation and use of AI systems on behalf of providers/deployers” are not employees, but persons falling broadly under the organisational remit: for example, a contractor, a service provider or a client.

 

  • Is there an actual obligation to measure employees' level of AI knowledge?

  Article 4 of the AI Act does not entail an obligation to measure employees' knowledge of AI. Yet, it affirms that AI providers and deployers should ensure a sufficient level of AI literacy, taking into account the technical knowledge, experience, education and training of employees.

 

  • Are there formal categorisations of the different types of AI systems in the Act, such as GenAI, conversational AI or AI assistants? Is there a list with concrete examples?

 

Yes. The AI Act distinguishes between AI models, including general-purpose AI (GPAI) models, and AI systems, including GPAI systems, as well as between prohibited practices and high-risk AI systems. For more details, please also see the Guidelines on AI system definition, published by the Commission on 6 February 2025 to assist providers and other relevant persons in determining whether a software system constitutes an AI system, and to facilitate the effective application of the rules.

These guidelines, which are non-binding and designed to evolve over time, explain the practical application of the legal concept as anchored in the AI Act. The Guidelines on AI system definition were published in addition to the Guidelines on prohibited artificial intelligence (AI) practices, as defined by the AI Act.

 

  • Where do we find further documents and videos on Article 4 of the AI Act?

 

For the time being, information on the Commission's activities in relation to Article 4 of the AI Act can be found on the AI Pact webpage, including the recording of the webinar that took place on 20 February 2025 and the living repository on AI literacy. A dedicated webpage on AI literacy and skills is under preparation.

 

 

  • What should be the minimum content of an AI literacy programme complying with Article 4 of the AI Act?

  The AI Office will not impose strict requirements regarding Article 4 of the AI Act and its “sufficient level of AI literacy”. On the contrary, it considers a certain degree of flexibility necessary, given the breadth of the topic of AI literacy and the fast-evolving nature of AI technology. Yet, as a minimum, to comply with Article 4 of the AI Act, providers and deployers of AI systems should:

 

a) Ensure a general understanding of AI within their organisation: What is AI? How does it work? What AI is used in our organisation? What are its opportunities and dangers?

b) Consider the role of their organisation (provider or deployer of AI systems): Is my organisation developing AI systems, or just using AI systems developed by another organisation?

c) Consider the risk of the AI systems provided or deployed: What do employees need to know when dealing with such an AI system? What are the risks they need to be aware of, and do they need to be aware of mitigation measures?

d) Concretely build their AI literacy actions on the preceding analysis, considering:

  • the differences in technical knowledge, experience, education and training of the staff and other persons: How much do the employees/persons know about AI and the organisation's systems they use? What else should they know?

  • the context in which the AI systems are to be used and the persons on whom the AI systems are to be used: In which sector and for which purpose/service is the AI system being used?

 

  Considerations a, b, c and d include legal and ethical aspects. Therefore, connections to the EU AI regulation (i.e., understanding of the AI Act) and to principles of ethics and governance are encouraged.

 

 

  • Does a risk-based approach apply to the AI literacy requirements of Article 4 of the AI Act?

  As noted in the answer to the previous question, to comply with Article 4 of the AI Act, organisations should consider their role (provider or deployer of AI systems) as well as the risks associated with the AI systems they provide and/or deploy, and adapt their AI literacy approach accordingly. For example, if the organisation's AI systems are high-risk under Chapter III of the AI Act, additional measures might be relevant to ensure employees know how to deal with those AI systems and how to avoid and/or mitigate their risks.

 

  • Is AI training mandatory under Article 4 of the AI Act, or are other AI literacy initiatives also allowed?

 

This depends on the organisation's answers to the considerations set out above on the minimum content of an AI literacy programme. Yet, in many cases, simply relying on the AI systems' instructions for use, or asking the staff to read them, might be ineffective and insufficient. Article 4 of the AI Act is intended to ensure that trainings and guidance are provided as most appropriate to each target group's level and type of knowledge, as well as to the context and purpose of the AI systems used in the organisation.

This is also in line with other provisions of the AI Act. For example, Article 26 introduces an obligation for deployers of high-risk systems to ensure that the staff dealing with the AI systems in practice are sufficiently trained to handle the system and ensure human oversight. Relying on the instructions for use is therefore not sufficient; further measures are necessary.

 

 

  • What should be the format of a mandatory AI training in companies?

  There is no one-size-fits-all approach when it comes to AI literacy, and the AI Office does not intend to impose strict requirements or mandatory trainings. The requirements for a training depend on the concrete context. While replicating the practices collected does not automatically grant a presumption of compliance with Article 4, the initiatives in the living repository on AI literacy could provide some inspiration.

 

 

  • When it comes to compliance with Article 4 of the AI Act, are there requirements for specific industries, including financial services and healthcare?

  No, the AI Office does not impose requirements for specific sectors. Yet, as noted above regarding the minimum content of an AI literacy programme, the context – including the sector and the purpose – in which AI systems are provided/deployed is relevant when developing an AI literacy initiative. Moreover, the level of risk of the AI systems should be considered.

 

 

  • AI literacy extends to other persons acting on a deployer's behalf: should any service provider using AI have a contractual obligation to demonstrate AI literacy?

  This depends on the concrete type of AI system and its risk (e.g., whether it is high-risk). In general, people working for a service provider or contractor need to have the appropriate AI skills to fulfil the task in question, just as employees do.

 

 

  • Does a company whose employees use ChatGPT for, e.g., writing advertising text or translating text need to comply with the AI literacy requirement of Article 4 of the AI Act?

  Yes. Employees should be informed about the specific risks, for example, hallucination.

 

 

  • Does a company whose employees use an AI tool with a human-in-the-loop approach comply with the AI training requirement through internal resources?

  These are two distinct questions. Both the employees and the human-in-the-loop need the appropriate skills, targeted to the system they are using.

 

 

  • Can we consider people with a degree/experience in AI development as AI literate (in the context of Article 4 of the AI Act) without taking any further action?

 

Normally yes, but it depends on the AI tool in question and on their specific qualifications. This is particularly relevant in view of the speed of technological developments.

The organisation should still consider the steps listed above on the minimum content of an AI literacy programme and ask itself: Do these technical employees know what needs to be known about the organisation's AI systems and how to deal with them? Are they aware of all the risks and of how to avoid/mitigate them? Moreover, the organisation should consider what else these employees might need to know, e.g., the legal and ethical aspects of AI.

 

 

  • Are AI literacy training concepts allowed to differentiate between different levels of detail?

  Yes. Article 4 of the AI Act encourages providers and deployers to consider the knowledge, experience, education and training of employees and other persons when providing a sufficient level of AI literacy. Given the differences between AI systems, and since the level of knowledge and experience, as well as the type of education and training received, may vary, different levels of training or learning approaches can be appropriate.

 

 

  • How do organisations have to document their actions to comply with Article 4 of the AI Act and the best-effort provisions in it? Do they need specific certificates?

  There is no need for a certificate. Organisations can keep an internal record of trainings and/or other guiding initiatives.

 

 

  • Is an AI officer necessary, similar to the DPO under the GDPR? Can a DPO and an AI officer be the same person? Should an organisation set up an AI governance board?

  No. No specific governance structure is mandated to comply with Article 4 of the AI Act.

 

 

 

 

  • Will the AI Office issue guidelines on Article 4 of the AI Act, like the published guidelines on prohibited practices or something comparable, or will this be a task for Member States?

 

For the moment, guidance will be provided via further examples of practices, webinars and clarifications in this Q&A. Further guidance on enforcement might be provided by the relevant national market surveillance authorities once nominated. The AI Office will work closely with the AI Board on the topic of AI literacy, in line with Article 66(f) and Article 95(2)(f) of the AI Act.

 

 

  • Since the training is context-specific, will the AI Office issue guidelines for providers of high-risk AI systems under Annex III, to assist them on this front?

  The Commission will publish guidelines on the application of the requirements and obligations referred to in Articles 8 to 15 and in Article 25 of the AI Act, and these guidelines will also touch upon issues of literacy, when discussing, for example, human oversight or risk management.

 

 

  • Does the Commission already have a plan to implement Article 4 of the AI Act with respect to its own employees?

 

The AI@EC Communication already identified, as an operational action, the development of a policy to build and maintain an AI-skilled workforce. The European Commission has already implemented several measures for its staff regarding AI literacy:

  • The creation of an internal AI-specific web portal, a one-stop shop accessible to all staff for AI-related content: AI guidelines, AI training resources, events and news.
  • The definition, on the Commission's training platform, of AI learning packages oriented to different target groups (generalists, managers and developers/specialists). These packages contain a curated list of relevant trainings, categorised as essential, highly recommended and recommended. Additional trainings and recordings of webinars are also available on the platform.
  • AI tool trainings: a specific section of the AI portal lists the AI tools available to all staff and includes the relevant learning resources for each tool. There are periodic Q&A sessions on using AI in daily work.
  • An AI community of practice, where anyone can ask questions related to AI and interact with AI experts.

 

 

  • How does the AI Office plan to support EU agencies in developing their AI literacy programs?

  Currently, many of the agencies already have access to the Commission's learning platform (EU-Learn), as well as to resources such as the AI learning packages and other Commission trainings.

 

 

  • What additional guidance and resources does the AI Office plan to release in the near future? Will the AI Office share a rubric to test for compliance with AI literacy?

  To support implementation of and compliance with Article 4 of the AI Act, the AI Office will continue nurturing the living repository of AI literacy practices, gathering further examples from organisations and updating the Q&A at hand. Further awareness activities will be organised, and a dedicated webpage for activities related to AI literacy (within and beyond the remit of Article 4), skills and talent will be launched, with the aim of promoting access to AI literacy and fostering dialogue on AI for all.

 

 

  • How can industry organisations help with the development of AI literacy?

 

The AI Office highly values the insights and expertise of all stakeholders, including industry. For this reason, we have created the AI Pact to foster a collaborative community where stakeholders can share best practices and internal policies that may be of use to others in their compliance journey. With respect to AI literacy, within the AI Pact we have recently published a living repository of AI literacy practices; any provider and/or deployer of AI systems that has put in place an AI literacy programme is invited to submit a contribution.

On a regular basis, the AI Office will verify that all contributions received meet the minimum criteria of transparency and reliability before accepting them into the public repository. During the verification period, the survey might be temporarily closed.

 

 

 

 

 

 
