The AI Act sets out transparency obligations for providers and deployers of certain AI systems.
This Q&A accompanies the Commission's consultation and call for expression of interest, through which stakeholders can provide input for the guidelines and the Code of Practice on transparent generative AI systems and apply to participate in the Code of Practice process.
**What does Article 50 of the AI Act entail? How do these obligations protect people against manipulation, deception, and other risks identified by the AI Act?**
- Transparency regarding the artificial origin of AI-generated or AI-manipulated content is essential. The growing availability of AI systems capable of generating all kinds of content makes it increasingly hard to distinguish AI content from human-generated, authentic content. This raises new risks of misinformation and manipulation at scale, fraud, impersonation and consumer deception.
- In this context, Article 50 of the AI Act sets out transparency obligations for providers and deployers of certain AI systems, including generative and interactive AI systems and deep fakes.
- These obligations are intended to reduce the risks of deception, impersonation and misinformation and to foster trust and integrity in the information ecosystem. People will know when they are interacting with AI or exposed to AI-generated content, which will help them make informed decisions.
**In which cases and for which types of AI systems do these transparency obligations apply?**
- Article 50 of the AI Act covers four types of AI systems. First, providers of AI systems that interact with people must inform them that they are interacting with an AI system and not a human, unless this is obvious.
- Second, providers of AI systems that generate or manipulate content must facilitate the identification of such content by marking it in a machine-readable manner and enabling related detection mechanisms.
- Third, deployers of emotion recognition or biometric categorisation systems must ensure that individuals exposed to these systems are informed.
- Fourth, deployers of AI systems generating or manipulating deep fake content, or AI-generated or manipulated text published to inform the public on matters of public interest, must inform users about the artificial origin of the content, except in defined cases. In all cases, the information must be provided in a clear and accessible format.
**What technical solutions are considered for marking and detecting AI-generated content?**
- Techniques and methods for marking and detecting AI-generated content include watermarks, metadata identification, cryptographic methods for proving the origin and authenticity of content, logging methods, fingerprints and other techniques.
- Relevant techniques and methods should be sufficiently reliable, interoperable, effective and robust as far as this is technically feasible. Providers should also take into account the specificities and the limitations of the different types of content and the relevant technological and market developments in the field.
- Further identifying emerging state-of-the-art techniques and practices will be an important part of the work to be done in the context of the Code of Practice on transparent generative AI systems.
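As a simplified illustration of the marking-and-detection idea described above, the sketch below attaches a machine-readable provenance record (a content fingerprint plus a keyed signature) to a piece of generated text and then verifies it. All names here (`mark_content`, `verify_mark`, the JSON fields, the HMAC scheme) are hypothetical and chosen only for illustration; they are not a standard prescribed by the AI Act or the Code, and real-world schemes such as C2PA manifests or statistical watermarks are considerably more involved.

```python
import hashlib
import hmac
import json

# Placeholder signing key; a real provider would use proper key management.
SIGNING_KEY = b"provider-secret-key"


def mark_content(text: str, generator: str) -> dict:
    """Wrap AI-generated text with a fingerprint and a signed provenance tag."""
    fingerprint = hashlib.sha256(text.encode()).hexdigest()
    record = {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "generator": generator,
            "sha256": fingerprint,
        },
    }
    payload = json.dumps(record["provenance"], sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_mark(record: dict) -> bool:
    """Detection side: check the signature, then check the fingerprint."""
    payload = json.dumps(record["provenance"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False
    actual = hashlib.sha256(record["content"].encode()).hexdigest()
    return record["provenance"]["sha256"] == actual


marked = mark_content("An AI-written paragraph.", "example-model-1")
print(verify_mark(marked))   # True: mark intact
marked["content"] = "Edited by a human."
print(verify_mark(marked))   # False: fingerprint no longer matches
```

The two halves mirror the obligation's two sides: providers mark content at generation time, while detection mechanisms let others verify the mark later. A tampered signature or altered content makes verification fail.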
**Why is there a need for a Code of Practice on transparent generative AI systems?**
The Code of Practice on transparent generative AI systems will be a voluntary tool to ensure proper compliance with the obligations laid down in Article 50(2) and (4) of the AI Act for providers and deployers of generative AI systems. If approved by the Commission as adequate, it will provide clear measures for providers and deployers developing or planning to develop in-scope generative AI systems. The Code will ensure that generative AI systems placed on the European market by adhering providers and deployers are sufficiently transparent, in line with the respective transparency requirements of the AI Act.
**Why do we need both guidelines and a Code of Practice on transparent AI systems?**
- The practical measures of the Code of Practice will only cover the obligations laid down in Article 50(2) and (4) of the AI Act. The guidelines will cover Article 50 as a whole.
- Additionally, the Code of Practice will be developed through a multi-stakeholder process and will provide technical implementation means.
- The guidelines to be developed and adopted by the Commission will clarify the scope of application, relevant legal definitions, the transparency obligations, the exceptions and related horizontal issues.
**What role will civil society, academia, SMEs, deployers and providers have in shaping the Code?**
- Providers and deployers of generative AI systems subject to the obligations, including SMEs, are the main addressees of the Code. Providers of transparency techniques, civil society organisations, academic experts, and other relevant organisations or industry associations are invited to support the drafting process. By involving all these stakeholders, the Code of Practice will facilitate the effective implementation of the respective transparency obligations. The Code will also support practical arrangements for making detection mechanisms accessible and will facilitate cooperation with other actors along the value chain.
- Stakeholders are encouraged to express their interest by 9 October in order to participate in the drafting process. The AI Office will verify eligibility on the basis of submitted and publicly available information and confirm participation to the respective stakeholders.
**How does the Code of Practice on transparent generative AI systems interact with the recent Code of Practice on General-Purpose AI?**
- The transparency obligations under Article 50 of the AI Act are complementary to the transparency rules applicable to general-purpose AI models (Articles 53 and 55). The latter have been further detailed in the Code of Practice for General-Purpose AI (GPAI) models and in the Commission template for the summary of the content used for model training.
- Notably, the GPAI Code of Practice focuses on GPAI models, on the documentation and information to be provided to the AI Office, national competent authorities and downstream providers, and on the transparency of the training data. The Code on transparent generative AI systems, by contrast, addresses marking techniques and the transparency of AI-generated or manipulated outputs towards the persons exposed to them.
- The Code on transparent generative AI systems targets transparency obligations at the system level, including, but not limited to, GPAI systems. Transparency techniques which can be implemented by providers of GPAI models to facilitate transparency obligations for downstream AI system providers will also be considered in the context of this Code.
**What are the next steps following the launch of the public consultation and call for expressions of interest?**
- The AI Office will invite all eligible stakeholders to participate in the drafting process of the Code of Practice and will select Chairs and Vice-Chairs for the different working groups.
- An opening plenary will take place in early November with all selected participants. The drafting process is expected to conclude no later than the beginning of June 2026.
**How will compliance with the AI Act be assessed once the Code of Practice is finalised?**
If the Code is approved by the Commission, signatories will be able to rely on the Code to demonstrate compliance with the obligations. With respect to providers and deployers that adhere to the Code, the Commission will focus its enforcement activities on monitoring their adherence to the Code.
They will also benefit from increased trust from the Commission and other stakeholders.