Google Threat Intelligence Group (GTIG) recently reported that threat actors, including state-sponsored groups from North Korea, Iran, China, and Russia, are misusing Gemini, Google's large language model (LLM), to support all stages of the attack lifecycle. Specifically, GTIG observed threat actors using Gemini for coding and scripting tasks, to accelerate reconnaissance, to research publicly known vulnerabilities, and to enable malware development and post-compromise activity.
Examples of State-Sponsored Threat Actors' Use of Gemini
GTIG documented several examples of state-backed actors integrating Gemini into their operations, including:
- North Korea: GTIG observed the North Korean government-backed group UNC2970 use Gemini to synthesize open-source intelligence (OSINT) and profile high-value targets to support campaign planning and reconnaissance.
- Iran: The Iranian state-sponsored actor APT42 used generative AI models, including Gemini, to search for official email addresses of specific entities and to conduct reconnaissance on business partners as part of phishing operations.
Rise of Model Extraction Attacks
GTIG also observed a notable increase in model extraction attacks, also known as 'distillation attacks', primarily from private sector entities seeking to accelerate AI model development at lower cost. In a model extraction attack, an attacker submits large numbers of queries to an existing AI model and harvests its responses, using the resulting prompt-response pairs as training data for a new model.
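As a rough illustration of the mechanics, the Python sketch below harvests prompt-response pairs from a target model and writes them out as fine-tuning data for a cheaper "student" model. The `query_target_model` function is a hypothetical stand-in for whatever API the target service exposes; the prompts and file format are likewise assumptions for illustration.

```python
import json

def query_target_model(prompt: str) -> str:
    """Hypothetical stand-in for the target model's API.
    In a real extraction attempt this would be a rate-limited,
    pay-per-call endpoint; here it just returns a canned response."""
    return f"[model response to: {prompt}]"

# Systematically cover a task domain with many distinct prompts.
prompts = [f"Summarize topic #{i} in two sentences." for i in range(1000)]

# Collect prompt-response pairs as supervised training data
# for a new model (the classic distillation setup).
with open("distillation_data.jsonl", "w") as f:
    for prompt in prompts:
        record = {"prompt": prompt, "completion": query_target_model(prompt)}
        f.write(json.dumps(record) + "\n")
```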
Given the rise in model extraction attacks, organizations that provide AI models as a service should continue to monitor application programming interface (API) access for any indications of distillation and extraction activity.
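What such monitoring might look for, in minimal form: extraction attempts tend to produce unusually high query volumes with broad, near-duplicate-free prompt coverage from a single key. The sketch below flags API keys whose request count and unique-prompt ratio both exceed illustrative thresholds. The log schema, field names, and threshold values are assumptions, not any vendor's actual telemetry format.

```python
from collections import defaultdict

# Illustrative thresholds; real values would be tuned per service.
MAX_REQUESTS = 10_000      # daily requests before a key looks suspicious
MIN_UNIQUE_RATIO = 0.9     # near-100% unique prompts suggests scripted coverage

def flag_extraction_candidates(api_log: list[dict]) -> list[str]:
    """api_log: assumed records of the form {"api_key": ..., "prompt": ...}."""
    prompts_by_key = defaultdict(list)
    for record in api_log:
        prompts_by_key[record["api_key"]].append(record["prompt"])

    flagged = []
    for key, prompts in prompts_by_key.items():
        unique_ratio = len(set(prompts)) / len(prompts)
        # High volume plus almost no repeated prompts is a distillation signature.
        if len(prompts) > MAX_REQUESTS and unique_ratio > MIN_UNIQUE_RATIO:
            flagged.append(key)
    return flagged
```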
AI-Integrated Malware and “Jailbreak” Ecosystems
Although GTIG has not yet identified threat actors using experimental or fully autonomous AI-enabled techniques, it observed threat actors incorporating conventional AI-generated capabilities in their intrusion operations, such as supporting malware creation, improving existing malware, and researching vulnerabilities.
For instance, GTIG identified the HonestCue malware leveraging Gemini's API to dynamically generate and execute malicious C# code in memory. Rather than using an LLM to update itself, HonestCue has been observed using Gemini to generate code that downloads and executes another piece of malware.
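From a defender's perspective, malware that calls a commercial LLM API leaves a network footprint. The sketch below takes a detection angle rather than reproducing the malware pattern itself: it scans proxy logs for connections to the public Gemini API endpoint (generativelanguage.googleapis.com) from processes with no business reason to use it. The log schema and process allowlist are hypothetical, for illustration only.

```python
# Public endpoint for the Gemini API; traffic here from unexpected
# binaries is worth investigating.
GEMINI_API_HOST = "generativelanguage.googleapis.com"

# Hypothetical allowlist of processes expected to call LLM APIs.
EXPECTED_PROCESSES = {"chrome.exe", "approved_ai_client.exe"}

def suspicious_llm_callers(proxy_log: list[dict]) -> list[dict]:
    """proxy_log: assumed records like {"process": ..., "dest_host": ...}."""
    return [
        record for record in proxy_log
        if record["dest_host"] == GEMINI_API_HOST
        and record["process"] not in EXPECTED_PROCESSES
    ]

# Example: an unknown binary reaching the Gemini endpoint gets flagged.
log = [
    {"process": "chrome.exe", "dest_host": GEMINI_API_HOST},
    {"process": "svchost_update.exe", "dest_host": GEMINI_API_HOST},
]
for hit in suspicious_llm_callers(log):
    print("review:", hit["process"])
```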
GTIG also highlighted the growth of an underground "jailbreak" ecosystem of AI-enabled tools and services that support malicious activities. Contrary to their claims, threat actors are not developing custom models and are instead relying on existing commercial AI models to facilitate illicit activity. One example is Xanthorox, an underground toolkit marketed as an autonomous AI platform for generating phishing content, malware, and ransomware; it was later found to be powered by multiple third-party commercial AI products, including Gemini.
Conclusion
The increasing misuse of generative AI models such as Gemini reflects a rapidly evolving threat landscape in which state-sponsored and financially motivated actors use AI to streamline reconnaissance, phishing, malware development, and post-compromise activity. At the same time, large-scale model extraction attempts highlight growing risks to AI service integrity and intellectual property. As AI-enabled threats continue to mature, organizations should consider strengthening safeguards, monitoring AI platform usage, and proactively testing their security posture against increasingly AI-driven adversaries.
