State-Sponsored Hackers Enhance Cyberattacks with AI

State-sponsored hacker groups are exploiting artificial intelligence to accelerate and improve cyberattacks. According to a report from Google's Threat Intelligence Group (GTIG), actors from Iran, North Korea, China, and Russia are using models like Google's Gemini to refine phishing campaigns and develop more sophisticated malware.

The quarterly AI Threat Tracker report reveals how these groups have integrated AI into various stages of the attack cycle, achieving significant productivity gains in reconnaissance, social engineering, and malware development.

"For government-backed threat actors, large language models (LLMs) have become essential tools for technical research, targeting, and the rapid generation of nuanced phishing lures," GTIG researchers stated in the report.

AI-Powered Reconnaissance

The Iranian group APT42 used Gemini to enhance reconnaissance and targeted social engineering operations. The group used the model to identify official email addresses at specific organizations and to research credible pretexts for approaching targets. By feeding a target's biography to Gemini, APT42 created personas and scenarios designed to solicit interaction. The group also used AI for translation and to refine non-native phrasing, capabilities that help state-sponsored hackers avoid traditional phishing warning signs such as poor grammar or awkward syntax.

The North Korean group UNC2970, which specializes in targeting the defense sector while impersonating corporate recruiters, used Gemini to synthesize open-source intelligence and profile high-value targets. Its reconnaissance included researching major cybersecurity and defense companies, mapping specific technical roles, and gathering salary information.

Model Extraction Attacks on the Rise

In addition to operational use, Google DeepMind and GTIG identified an increase in model extraction attempts, also known as "distillation attacks," aimed at stealing the intellectual property embedded in AI models. One campaign targeting Gemini's reasoning capabilities involved more than 100,000 prompts designed to force the model to reveal its complete reasoning process. The breadth of the questions suggested an attempt to replicate Gemini's reasoning ability in languages other than English across a wide range of tasks.

AI-Integrated Malware

GTIG observed malware samples, tracked as HONESTCUE, that use Gemini's API to outsource the generation of functionality. The malware is designed to evade traditional network-based detection and static analysis through a multi-layered obfuscation approach. HONESTCUE functions as a download-and-launch framework that sends prompts to Gemini's API and receives C# source code in response. A fileless second stage then compiles and executes the payload directly in memory, leaving no artifacts on disk.

Separately, GTIG identified COINBAIT, a phishing kit whose construction was likely accelerated by AI code generation tools. The kit, which masquerades as a major cryptocurrency exchange for credential harvesting, was built using the Lovable AI platform.

Abuse of AI Chat Platforms

In a social engineering campaign first observed in December 2025, Google saw threat actors abuse the public sharing features of generative AI services, including Gemini, ChatGPT, Copilot, DeepSeek, and Grok, to host deceptive content that distributed ATOMIC malware targeting macOS systems.

Attackers manipulated AI models into generating realistic-looking instructions for common computer tasks, embedding malicious command-line scripts as the "solution." By creating shareable links to these chat transcripts, the attackers were able to host the initial stage of their attacks on trusted domains.

Thriving Underground Market for Stolen API Keys

GTIG's observations of English- and Russian-language underground forums indicate persistent demand for AI-enabled tools and services. However, state-sponsored hackers and cybercriminals struggle to develop custom AI models and instead rely on mature commercial products accessed through stolen credentials. One toolkit, "Xanthorox," was advertised as a custom AI for autonomous malware generation and phishing campaign development. GTIG's investigation revealed that Xanthorox was not a custom model but was in fact powered by several commercial AI products, including Gemini, accessed through stolen API keys.

Google has disabled accounts and resources associated with the malicious activity and has strengthened its classifiers and models to prevent them from assisting similar attacks in the future.