Earlier this month, the Spanish Data Protection Authority (Agencia Española de Protección de Datos, or AEPD) issued new guidance on the privacy and data protection risks associated with uploading images or photos – whether directly or indirectly identifying individuals – into generative AI tools. The guidance is particularly focused on situations where those images are hosted by third‑party online services or digital platforms.
The AEPD published this guidance in response to the sharp rise of AI‑generated content on social media in recent months. According to the regulator, such content is often created or processed without an individual’s authorization – or even awareness – and may be reused, repurposed, or circulated for opaque purposes by various actors across the digital ecosystem.
Key Takeaways from the AEPD’s Guidance
The AEPD highlights both obvious (visual) risks and less visible (non‑visual) risks arising from the use of personal images in generative AI systems. Both types can affect the lawfulness of processing under the General Data Protection Regulation (GDPR) and, in some cases, may render the processing outright unlawful.
Obvious (Visual) Risks
– Reasonable expectations and legal basis
The secondary use of individuals’ images to generate new AI‑created content may fall outside what a person could reasonably expect. This can undermine the validity of the legal basis relied on and may conflict with GDPR principles such as purpose limitation.
– Scale and ease of dissemination
AI‑generated content can reach large audiences quickly, and can be reshared with minimal effort. This increases the risk of loss of control over personal data and may raise concerns under the GDPR fairness principle.
– Difficulty removing content
Challenges in deleting content – or exercising rights to request its removal – heighten the risk of negative impacts on individuals’ rights.
– Sexualization and synthetic intimate content
Allowing (or failing to prevent) manipulations that add nudity, eroticization, or sexual innuendo dramatically increases the risk of harm, including harassment, blackmail, or unauthorized viral dissemination.
– False associations and reputational harm
Tools that allow users to alter the context of an image, or to attribute untrue statements to the person depicted, can easily cause reputational damage.
– Impact on vulnerable individuals
Risks become more severe when the content involves children, the elderly, or individuals with specific vulnerabilities, as harmful or sensitive material can have especially serious consequences.
Less Obvious (Non‑Visual) Risks
– Loss of control due to third‑party hosting
Behind the scenes, platform configurations may enable reuse, sharing, or indefinite storage of content – sometimes without authorization – creating additional layers of risk.
– Long and complex supply chains
Platforms often rely on multiple technology providers (e.g., cloud storage, infrastructure operators). Weak controls or inadequate oversight can increase the likelihood of unauthorized downstream use or personal data breaches.
– Unclear purposes for further processing
Platforms may process hosted content for additional purposes – such as security, misuse detection, or service improvement – that individuals may not be aware of. If these purposes are not compatible with the original collection purpose, the lawfulness of processing may be affected.
– Lack of transparency and rights challenges
When people do not know their image is being used or reused, they cannot understand or control how their data is processed, nor can they effectively exercise their GDPR rights.
– “Multiplier effect” of GenAI
Generative AI can rapidly create and distribute numerous derivative versions of content, amplifying risks and making it harder for individuals to regain control.
In addition to GDPR concerns, the AEPD notes that misuse of AI‑generated images may also infringe other fundamental rights – such as honor, reputation, or image rights – which are governed by national laws across EU Member States.
Companies that enable or facilitate content creation – especially social media platforms, video‑sharing services, and other hosting providers – as well as organizations deploying internal GenAI systems, should carefully assess and mitigate these risks. This applies equally to companies outside the EU that offer services within Europe or target EU‑based individuals.
For more information on addressing these risks when developing or deploying generative AI systems, please contact A&B’s Privacy, Cyber & Data Strategy Team.
