On 18 February 2026, the Spanish Data Protection Authority (Agencia Española de Protección de Datos or ‘AEPD’) published an 81‑page guidance document on the privacy aspects of AI systems operating as agents – commonly referred to as ‘agentic AI’. The guidance is aimed at companies that process personal data under the EU General Data Protection Regulation (GDPR) while designing or using agentic AI.
The AEPD’s guidance explains the concept of agentic AI, identifies the privacy and data protection vulnerabilities inherent in such systems, and outlines the associated risks. It also sets out recommended measures that controllers and processors can implement to mitigate the risks that agentic AI may pose to individuals’ privacy and data protection rights. Throughout its guidance, the AEPD also elaborates on the broader implications of the GDPR for the use of agentic AI, for example, by illustrating how traditional GDPR concepts – such as the determination of controller and processor roles, information and transparency obligations, data subject rights, record keeping requirements (ROPAs), automated decision‑making, data protection impact assessments, and personal data breach management – apply in this context.
What Is Agentic AI?
According to the AEPD, agentic AI refers to an AI system that uses large language models to achieve a specific objective while adapting its behavior based on circumstances and evolving goals. Such a system is capable of learning from experience and making decisions within its perceptual and computational limits. The AEPD adds that agentic AI can operate through multiple subtasks executed sequentially and in a structured manner, forming a coherent chain of reasoning. This chain may rely on various tools to complete each step, including external third‑party services as well as internal systems within the organization that designs or deploys the AI tool.
The AEPD further explains that AI systems classified as agentic typically exhibit 6 key characteristics (which may be more or less pronounced depending on the AI agent):
- Autonomy: The ability to operate without continuous human intervention;
- Environmental perception: The capacity to sense and process real‑time inputs through sensors, cameras, APIs, and similar tools to interpret dynamic contexts;
- Action-taking capabilities: The ability to perform external actions beyond generating text or code, such as sending information, interacting with users, executing contracts, or controlling devices;
- Proactivity: Anticipating needs or issues rather than merely reacting to prompts or events;
- Planning and reasoning: Evaluating alternatives, prioritizing optimal outcomes, and sequencing actions effectively; and
- Memory and adaptability: Accumulating experience, adjusting behavior based on user responses, and improving iteratively through feedback or self-assessment.
The AEPD provides several practical examples showing in detail how agentic AI tools can be configured around their own chain of reasoning (referred to as ‘pipelines’), which may vary in length and complexity. These pipelines can involve numerous data processing operations across different systems, formats, and levels of trust, and operate within an architecture that reflects varying degrees of operational independence. For example, an AI agent may autonomously initiate service calls – connecting to APIs, databases, websites, or other tools – and use them as required, all within parameters that can be pre-defined by humans. The AEPD also describes these technical architectures through concrete scenarios, such as a company deploying an agentic AI system to manage all aspects of an employee’s business trip. In this case, the agent accesses the employee’s calendar and autonomously performs tasks including contacting accommodation providers, checking currency exchange rates, reviewing transport options, purchasing tickets, and consulting weather forecasts.
What Are the Privacy Risks of Agentic AI?
Despite the benefits that AI agents can bring, the Spanish authority emphasizes how the integration of agentic AI into corporate processes introduces a new and expanded attack surface that must be carefully managed, particularly to avoid significant risks to the privacy and data protection rights of individuals. In this regard, the AEPD explains that two categories of risks must be considered:
- Risks arising from authorized processing: Even when the processing of personal data through agentic AI complies with GDPR requirements – such as identifying an appropriate legal basis and informing data subjects – significant threats may still emerge and create meaningful risks for individuals, including (among others):
- Lack of accountability: Agentic AI can operate across important and complex workflows that affect multiple aspects of internal corporate governance. If agentic AI is not properly integrated into an organization’s governance framework, it may become difficult for controllers to demonstrate GDPR compliance and meet their accountability obligations;
- Poor management of user data access: Integrating AI agents without properly configuring access rights and restrictions within an organization can lead to significant privacy issues, including excessive processing of personal data, unauthorized disclosure to third parties beyond the intended purposes, processing of inaccurate or outdated information (e.g., historical data no longer relevant to specific agentic AI actions), and risks to data integrity due to inadequate controls over data modification, enrichment, or alteration;
- Inadequate governance of the multiple processing purposes embedded in an AI system’s chain of reasoning: This can give rise to various issues, such as task planning that does not align with the intended purposes, insufficient oversight of third parties involved in the processing activities, violations of the GDPR’s data‑minimization and accuracy principles – for example, due to the use of outdated or incorrect data – risks of unauthorized automated decision‑making, and a lack of transparency that undermines appropriate reporting, assurance of results, and explainability. The AEPD also highlights additional threats that can arise from insufficient management of the pipeline and its components. These include misalignment – where the agent begins pursuing objectives that no longer correspond to those intended by the user, the organization, or compliance obligations – and feedback loops, which stem from the AI system’s long‑term memory and can amplify biases, cause behavioral drift, and distort decision‑making through reliance on contaminated data;
- Shadow‑leak exfiltration: Even when interactions within the AI’s chain of reasoning are lawful, there remains a risk of silent, gradual leakage of (often sensitive) data that evades security controls but can still enable attackers to reconstruct valuable confidential information. The AEPD explains that this may occur, for example, when an attacker extracts memory sets or contextual data through repeated queries about past decisions, successive reformulations, or by inferring patterns stored in long‑term memory;
- Poor management of human oversight: Failing to allocate sufficient resources or training, assign well-defined roles, or provide adequate motivation and incentives to the teams responsible for supervising the use of AI agents can undermine effective risk management. This may lead to shortcomings in reporting, difficulties in identifying breaches, or situations in which individuals may be discouraged from properly escalating incidents that have been detected; and
- Additional risks may stem from insufficient compartmentalization of an AI agent’s memory, inadequate filtering and cleansing of unstructured data and metadata, and excessive data retention. Further risks include the unavailability of third‑party services that supply data to the AI, and insufficient due diligence in supply‑chain management.
- Risks arising from unauthorized processing: Beyond the usual risks of GDPR non-compliance arising from unauthorized processing of personal data – such as breaches of the principles of lawfulness, integrity, or purpose limitation – additional threats may undermine the security of personal data processed through agentic AI, including (among others):
-
- (Direct or indirect) prompt injection: This involves manipulating the AI system into behaving in ways not intended or configured by its designers. Such attacks can, for example, cause the AI to disregard established guidelines or policies, potentially enabling unauthorized, biased, or excessive processing of personal data. Multimodal agents that process diverse data sources are particularly vulnerable to this type of manipulation. The AEPD offers concrete examples of prompt injection threats, including memory poisoning, zero‑click attacks, data exfiltration, session hijacking and lateral movement, context confusion, and privilege‑escalation techniques;
- Availability and resilience risks: Relying on multiple external sources and services that fall outside the control of the controller increases exposure to service interruptions, impersonation attempts, and denial‑of‑service attacks. Such disruptions can paralyze the AI agent, create gaps in the availability of personal data, or lead to erroneous outputs that can significantly affect the individuals whose personal data is being processed; and
- Unauthorized access to agentic AI memory: Unauthorized intrusion by third parties into complex agentic AI environments – including activity logs, system components, and connected services – may enable attackers to extract entire chains of personal data processed by the tool.
What Measures Can be Taken to Mitigate Privacy Risks?
The AEPD explains that numerous measures can be implemented to mitigate threats that could render processing operations non‑compliant with the GDPR, and groups these measures into the following categories:
- Governance and management processes: Establishing an appropriate information‑governance framework – covering both the AI agent and its deployment throughout its life cycle – is likely the most important measure for ensuring compliance with applicable regulations and alignment with organizational objectives. This governance framework should be tailored to the organization, informed by regulatory guidance and relevant industry standards. It must also account for the possibility of errors and unintended consequences, including failures that extend beyond personal‑data breaches. As such, anticipating scenarios involving misuse, errors, biases, gaps, and other unintended effects is essential. The governance program should also involve the Data Protection Officer (DPO) or a data protection advisor with expertise in applicable regulations and an understanding of the privacy and data protection impacts of agentic AI;
- Continued evidence‑based evaluation of the AI tool: In addition to automating processing operations, organizations are encouraged to automate monitoring processes to ensure ongoing compliance with applicable regulations and internal policies. This should include a structured approach to monitoring AI operations both holistically and at each stage of the chain of reasoning and the services involved. It should also incorporate continuous evaluation methods, such as benchmark testing, human‑in‑the‑loop assessments, and other forms of testing and simulation in real‑world environments;
- Data minimization: Organizations must implement measures to ensure that personal data processing is limited to what is strictly necessary. This includes defining clear policies governing access to organizational information – particularly by specifying which services and data repositories the AI agent may access and enforcing effective access restrictions. It also involves cataloguing data (e.g., through tagging) to identify information appropriate for use by the AI tool, as well as flagging unstructured sources that may be problematic due to their lack of fixed format, which complicates indexing, scalability, and searchability. The AEPD also highlights additional practical measures to support data minimization, such as deploying data‑loss‑prevention tools and applying pseudonymization techniques;
- Memory control: Proper management of an agentic AI system’s memory is essential to ensure the explainability and repeatability of inferences and the traceability of system capabilities – for example, for consent management and responding to data subject rights requests. Organizations should establish procedures to manage, catalogue, and enable the searchability of stored content, quality parameters, and other processing settings. Effective memory control can be achieved through strict retention periods, configurations that disable unnecessary memory storage, and long‑term memory‑sanitization and cleansing techniques designed to detect harmful content, remove unused or obsolete entries, and eliminate unnecessary user credentials; and
- Human oversight: Ensuring appropriate human oversight of AI‑agent capabilities is essential and must occur at every stage of the pipeline, following clearly defined procedures – particularly when configuring the tool’s level of autonomy, validating steps in the chain of reasoning, and implementing human‑approval protocols before the tool executes sensitive actions. Effective oversight requires that human involvement be meaningful and competent. This includes ensuring that those involved in the design or use of agentic AI have the necessary authority, expertise, and training; demonstrate diligence in performing their oversight functions; have access to adequate resources to understand and analyze risks; and can rely on clear escalation paths when issues arise.
What’s Next?
The AEPD guidance is particularly valuable, offering detailed practical and technical examples that help clarify the privacy and data protection challenges associated with designing and using agentic AI tools. Organizations that develop or deploy agentic AI – and that process personal data subject to the GDPR – are strongly encouraged to take this guidance into account, as it may reflect the position of other data protection regulators in Europe (that so far have not opined on the topic of agentic AI). This also applies to organizations not established in Europe, particularly when they offer AI‑agent tools to customers in the EU, enable AI‑generated content to be used in the EU, or target individuals in the EU – for example, by providing goods or services through AI agents or by monitoring individuals’ behavior using such technology.
For more information on the use of agentic AI systems and their privacy and data protection implications, please contact A&B’s Privacy, Cyber & Data Strategy Team.
