These guides are published without prejudice to the technical guides being prepared by the European Commission, which aim to ensure the consistent application of European AI regulation. The Spanish guides will serve as a basis for the Commission's working group in drawing up the European guides.
The Secretariat of State for Digitalization and Artificial Intelligence, through its Directorate-General for Artificial Intelligence and with the collaboration of the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), among other national market surveillance authorities, has published 16 practical guides that support implementation of and compliance with the Artificial Intelligence Act (AI Act).
The guides, a product of the AI Sandbox, aim to help Spanish companies that develop or deploy high-risk AI systems to comply with the current regulations. The documents provide recommendations aligned with the regulatory requirements pending the approval of harmonized standards applicable across all Member States, and will help SMEs, start-ups and large companies develop innovative, reliable AI systems that respect fundamental rights.
The guides, which are not binding and neither replace nor elaborate on the applicable regulations, are structured in three blocks:
1. Introductory Guides:
2. Technical Guides:
- 03. Conformity assessment. The objective is to guide, within the framework of the AI Sandbox, the conformity assessment process to which high-risk AI systems under the Artificial Intelligence Act will be subject (“CE marking”).
- 04. Quality management system. This guide presents the organizational and technical measures that providers and deployers can use to comply with Article 17 of the AI Act, which sets out the quality management requirements that any high-risk AI system and certain general-purpose AI systems must incorporate.
- 05. Risk management system. This guide presents the steps needed to identify, analyze, evaluate, and mitigate the potential risks of an AI system, helping providers and deployers comply with Article 9 on the risk management system. To ease understanding, an Excel workbook is included that walks through the process for different use cases.
- 06. Human oversight. The AI Act devotes Article 14 to human oversight of high-risk AI systems. People must be able to make informed, autonomous decisions regarding AI systems, so human oversight must be at the heart of the system’s functionality.
- 07. Data and data governance. This guide focuses on Article 10 of the AI Act, on the data governance requirements that any high-risk AI system and certain general-purpose AI systems must incorporate. Data governance is the set of measures implemented to ensure that the data used in training, validation and testing are adequate, relevant, sufficiently representative, and meet the established quality and completeness requirements.
- 08. Transparency and provision of information to users. This document provides implementation measures for providers and users of AI systems that facilitate compliance with the obligations expressed in Article 13 of the AI Act, dedicated entirely to transparency.
- 09. Accuracy. Article 15 of the AI Act addresses the accuracy, robustness and cybersecurity requirements that an AI system must meet. This guide focuses specifically on accuracy, indicating measures to keep AI systems from degrading their performance and accuracy specifications once they are up and running.
- 10. Robustness. This guide emphasizes the robustness of AI systems as set out in Article 15 of the AI Act. Technical robustness is a key requirement for high-risk AI systems, which must be resilient to harmful or undesirable behavior. The guide proposes measures to prevent such situations.
- 11. Cybersecurity. As set out in Article 15 of the AI Act, this guide provides cybersecurity measures for AI systems, focusing on the artificial-intelligence-specific aspects so that AI security is integrated into a broader cybersecurity scheme.
- 12. Automatically generated records and log files. This guide outlines steps that will help providers and deployers of AI systems meet the AI Act’s record generation and retention requirements that any high-risk AI system must incorporate. Developing an appropriate records management system also facilitates other tasks such as transparency and accountability.
- 13. Post-market monitoring plan. The AI Act requires a post-market monitoring plan for high-risk AI systems; that is, a set of activities led by providers and deployers to collect and evaluate the experience gained and thus identify any need for action.
- 14. Notification of serious incidents. This guide focuses on Article 73 of the AI Act and describes the procedure for reporting serious incidents, as well as measures to address such a procedure.
- 15. Technical documentation. The purpose of this guide is to explain what the Artificial Intelligence Act requires of the technical documentation, how to present it, and how all required documentation should be maintained.
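The identify, analyze, evaluate and mitigate cycle described for guide 05 can be sketched as a minimal risk register. This is purely an illustrative assumption: the class, the 1-5 rating scales and the acceptance threshold below are hypothetical and are not taken from the guide or the AI Act.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One entry in a hypothetical risk register for an AI system."""
    description: str
    likelihood: int                 # assumed scale: 1 (rare) to 5 (frequent)
    severity: int                   # assumed scale: 1 (negligible) to 5 (critical)
    mitigations: list = field(default_factory=list)

    def score(self) -> int:
        # Analyze: a simple likelihood x severity rating
        return self.likelihood * self.severity

    def is_acceptable(self, threshold: int = 6) -> bool:
        # Evaluate: compare the rating against an assumed acceptance threshold
        return self.score() <= threshold

    def mitigate(self, measure: str, new_likelihood: int) -> None:
        # Mitigate: record the measure and re-rate the residual risk
        self.mitigations.append(measure)
        self.likelihood = new_likelihood

# Identify a risk, evaluate it, then re-evaluate after mitigation
risk = Risk("Training data under-represents a demographic group",
            likelihood=4, severity=3)
print(risk.score(), risk.is_acceptable())   # 12 False
risk.mitigate("Re-sample and augment the training set", new_likelihood=2)
print(risk.score(), risk.is_acceptable())   # 6 True
```

In this sketch, mitigation lowers the residual likelihood and the entry is re-evaluated, mirroring the iterative nature of the risk management process the guide describes.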
3. Checklist of Requirements Guides:
- 16. Manual for the requirements checklist guides.
- Checklist and examples: includes (in a zip file) the examples mentioned in the guides and the Excel workbooks containing the checklist tool for each of the 12 requirements on which a diagnosis must be made:
- Quality management system
- Risk management system
- Human oversight
- Data and data governance
- Transparency
- Accuracy
- Robustness
- Cybersecurity
- Records
- Technical documentation
- Post-market monitoring
- Management of serious incidents
These guides are subject to an ongoing evaluation and revision process, with regular updates in line with the development of the standards and the various guidelines published by the European Commission, and will be updated once the Digital Omnibus amending the AI Act is approved.