On December 10, the Spanish supervisory authority for the EU AI Act (Agencia Española de Supervisión de Inteligencia Artificial, or AESIA) published a set of 16 detailed guidelines and non-binding checklists (available online here in Spanish) designed to help companies navigate their obligations under the AI Act, which entered into force in August 2024.
The 16 guidelines address the following topics:
- Guidelines No. 1 (26 pages) – Introduction to the AI Act: outline the main principles of the AI Act, including its risk-based approach and the various prohibitions and obligations that apply depending on an AI system’s risk level. They also explain the roles and characteristics of economic operators subject to the law, helping companies understand their position within the AI value chain. Key obligations – such as AI literacy, transparency requirements, and restrictions on high-risk systems – are discussed in detail along with a timeline of important dates for the law’s entry into force.
- Guidelines No. 2 (21 pages) – Practical Examples for Understanding the AI Act: offer detailed examples of AI systems to illustrate how the law’s obligations apply in practice. Among the examples, AESIA describes scenarios such as biometric identification systems in the workplace, AI tools for HR management, and AI systems for diabetes detection. It also provides examples linked to key definitions in the AI Act, including, among others, the marketing of an AI system, intended purpose, training data, biometric identification, deepfakes, general-purpose AI models, and the concept of systemic risk.
- Guidelines No. 3 (47 pages) – Conformity Assessments: explain the AI Act’s requirement to conduct conformity assessments. They detail what a conformity assessment entails, the recommended format for performing such an analysis, practical steps for meeting the requirements under the AI Act, and the standards and norms suggested by AESIA to ensure compliance.
- Guidelines No. 4 (44 pages) – Quality Management Systems: outline the key elements required to establish a quality management system for high-risk AI systems.
- Guidelines No. 5 (63 pages) – Risk Management Systems: outline the key elements required to establish a risk management system for high-risk AI systems.
- Guidelines No. 6 (36 pages) – Human Oversight: explain how to integrate human oversight obligations into the design and development of AI systems.
- Guidelines No. 7 (79 pages) – Data and Data Governance: explain how to manage data – including training, validation, and testing datasets – for AI systems in compliance with the AI Act’s requirements.
- Guidelines No. 8 (56 pages) – Transparency: clarify how to implement the AI Act’s transparency requirements in practice, tailored to the role a company assumes in the design, development, and use of AI systems.
- Guidelines No. 9 (62 pages) – Accuracy: explain how to comply with the AI Act’s accuracy requirements, providing concrete examples of measures to implement throughout an AI system’s lifecycle.
- Guidelines No. 10 (73 pages) – Robustness: outline the measures required under the AI Act to ensure the robustness of high-risk AI systems, providing key explanations for both AI system providers and deployers.
- Guidelines No. 11 (79 pages) – Cybersecurity: provide a list of cybersecurity measures along with practical guidance on their implementation, ensuring a level of security that meets the AI Act’s requirements.
- Guidelines No. 12 (34 pages) – Record Keeping: help providers and deployers of AI systems regulated under the AI Act meet their record-keeping obligations throughout the system’s lifecycle. They also aim to promote accountability and transparency in the operations of companies subject to the law.
- Guidelines No. 13 (38 pages) – Post-Market Monitoring: offer explanations and practical examples of the procedures and steps required to implement a post-market monitoring plan after the AI system has been placed on the market and is in operation.
- Guidelines No. 14 (25 pages) – Incident Reporting: outline the steps required to report serious incidents involving high-risk AI systems in compliance with the AI Act’s requirements.
- Guidelines No. 15 (62 pages) – Technical Documentation: detail the required content for the technical documentation of a high-risk AI system, the preferred format, and best practices for storing this documentation.
- Guidelines No. 16 (16 pages) – Checklist Manual and Checklists (13 Excel Files): AESIA has also prepared a file containing 13 Excel tables covering key areas of the AI Act. These tables serve as checklists to help companies document the AI Act compliance measures they have adopted. Each checklist also includes forms for assessing the level of compliance of implemented measures and identifying improvements needed to meet the obligations applicable to a given AI system.
These guidelines are highly detailed and include a wide range of practical examples of how to comply with the AI Act, making them particularly useful for companies involved in the production, marketing, and deployment of regulated AI systems. The checklists offer a practical tool for documenting, measuring, and reviewing the measures that companies have implemented to meet AI Act requirements. They are intended to align with tools and recommendations adopted by other EU regulators and may be used by any company subject to the AI Act’s obligations, including organizations established outside Spain or the EU. AESIA has confirmed that these guidelines remain under ongoing review and may be amended should the European Commission’s Digital Omnibus proposal – presented on November 19 and described in our previous blog post available here – be adopted.
If you have any questions about using AESIA’s AI guidelines or checklists, please contact A&B’s Privacy, Cyber & Data Strategy Team.
