Digital illustration of cyber hackers attempting to clone an AI model with prompts. Photo by Tima Miroshnichenko on Pexels

Google has found that hackers sent more than 100,000 prompts to its Gemini AI chatbot in an effort to copy its core abilities. The attacks came from state-backed groups in China, Iran, North Korea, and Russia, and the hackers used Gemini at every step of their cyber operations, from planning to spreading malware. The findings appear in Google's latest threat intelligence report, which covers activity in late 2025.

Background

Google's Threat Intelligence Group tracks how bad actors use new tech in attacks. Its quarterly AI Threat Tracker covers the final three months of 2025 and points to a big rise in hackers turning to AI tools like Gemini. These groups are government-backed and focus on stealing data or disrupting systems.

For years, state hackers have targeted US companies and government offices. Now they pull in AI to work faster and smarter. Iranian group APT42, for example, has hit media and political targets before. North Korean actors like UNC2970 often pretend to be job recruiters to trick defence workers. Chinese groups such as APT41 and APT31 build tools for spying and breaking into networks. Russia's actors join in with similar tactics.

The report notes that AI helps these groups skip old limits. Before, poor language skills gave away fake emails. Now AI writes smooth messages in any language. It also speeds up research on targets, making attacks harder to spot.

Google saw this misuse grow through 2025. Hackers tested Gemini for tasks like finding emails or building fake profiles. By December, some even hid attack code inside shared AI chats. All this fits a pattern where AI becomes a standard tool for cyber groups.

Key Details

Hackers tried to steal Gemini's 'reasoning' skills with over 100,000 prompts. These were wide-ranging questions in many languages, aimed at mapping how Gemini thinks step by step. Experts call this a distillation attack: the attacker trains a cheap copycat model on a big model's answers.
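
To make the mechanics concrete, here is a minimal sketch of how a distillation pipeline works in general. It assumes the Hugging Face transformers and datasets libraries; the query_teacher() function, the prompts, and the distilgpt2 student are illustrative stand-ins, not details from Google's report.

```python
# Minimal sketch of distillation: harvest a big model's answers, then
# fine-tune a small "student" model to imitate them. All names here are
# illustrative; this shows the general technique, not Google's findings.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

def query_teacher(prompt: str) -> str:
    """Hypothetical stand-in for the large 'teacher' model's chat API."""
    return "Step 1: ...  Step 2: ..."  # a real attack records the live answer

# Step 1: harvest prompt/answer transcripts. The report describes
# wide-ranging questions in many languages aimed at step-by-step reasoning.
prompts = [
    "Explain step by step why the sky is blue.",
    "Walk through solving 3x + 5 = 20.",
]
records = [{"text": f"Q: {p}\nA: {query_teacher(p)}"} for p in prompts]

# Step 2: tokenize the transcripts for causal language modelling.
tok = AutoTokenizer.from_pretrained("distilgpt2")
tok.pad_token = tok.eos_token  # GPT-2 models ship without a pad token

ds = Dataset.from_list(records).map(
    lambda batch: tok(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Step 3: fine-tune the student so its answers mimic the teacher's.
trainer = Trainer(
    model=AutoModelForCausalLM.from_pretrained("distilgpt2"),
    args=TrainingArguments(output_dir="student-model", num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```

Scale is the point: two prompts teach the student nothing, which is why a burst of 100,000 wide-ranging prompts reads as copying rather than normal use.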

State groups used Gemini for real attacks too. Iran's APT42 fed it target bios to build fake personas for phishing. They also got help translating and understanding local sayings to make emails look real.

Reconnaissance and Phishing

North Korea's UNC2970 used Gemini to dig into defence firms. It pulled job details, salaries, and company info to craft recruiter scams. This blurs lines between normal job hunts and spy work.

A China-linked group, APT31, planned attacks on US groups with Gemini's help. They gathered intel and tested attack ideas. Another, UNC795, fixed code bugs with it several times a week.

"This activity blurs the distinction between routine professional research and malicious reconnaissance." – Google Threat Intelligence Group

Phishing got slicker. Attackers built 'rapport' over several message exchanges before dropping malicious links. Language barriers vanished as Gemini handled local dialects.

Malware and New Tricks

Google spotted malware called HONESTCUE that calls Gemini's API to generate fresh code. It fetches C# code, runs it straight from memory, and leaves no files on disk, which lets it slip past basic file-based virus scans.

In December 2025, a ClickFix scam popped up. Hackers published fake 'fix your computer' conversations on Gemini and other chatbots like ChatGPT, then spread links to them through ads. The pages looked like helpful AI tips but tricked Mac users into pasting commands that installed ATOMIC malware.

APT42 and APT41 also used Gemini to debug malware and research exploits. One North Korean group, UNC1069, built crypto stealers and fake update prompts with it.

Google shut down linked accounts each time. Safety filters blocked most bad requests, but patterns showed clear abuse.

What This Means

Cyber defences need updates to catch AI-boosted attacks. Old tells like bad grammar no longer give phishing away. Teams must instead check for tailored backstories and multi-step cons.

AI firms like Google now train models to spot and block misuse, and they tweak safety rules based on cases like these. Still, publicly shared AI chats create a new risk: attackers can host their tricks on pages users already trust.

For targets in defence, tech, and government, this means more work. Hackers move faster from research to attack. US organisations saw attack planning aimed at them, though not all of it led to break-ins.

Broader use of AI in malware means scanners must watch network calls and in-memory code loads, not just files on disk. ClickFix shows how everyday tools turn into weapons. Users should double-check shared AI advice before running any commands it suggests.
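
As an illustration of what that monitoring could look like, here is a minimal sketch that flags local processes connected to the Gemini API endpoint. It assumes the psutil package; the hostname is the Gemini API's public endpoint, but matching on IP alone is noisy, since Google serves many products from shared addresses, so a real scanner would also check DNS logs or TLS metadata.

```python
# Minimal sketch: flag local processes holding live connections to the
# Gemini API endpoint. net_connections() may need elevated privileges
# on some systems.
import socket

import psutil

# Resolve the API host to its current IPs. Caveat: Google serves many
# products from shared addresses, so IP matching alone gives false
# positives; real scanners also inspect DNS logs or TLS metadata.
API_HOST = "generativelanguage.googleapis.com"
api_ips = {info[4][0] for info in socket.getaddrinfo(API_HOST, 443)}

for conn in psutil.net_connections(kind="tcp"):
    if conn.raddr and conn.raddr.ip in api_ips and conn.pid:
        try:
            proc = psutil.Process(conn.pid)
            print(f"{proc.name()} (pid {conn.pid}) -> "
                  f"{conn.raddr.ip}:{conn.raddr.port}")
        except psutil.NoSuchProcess:
            pass  # process exited between enumeration and lookup
```

A legitimate browser or coding plugin will show up here too, so output like this is a starting point for triage, not an automatic block list.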

Governments are tracking this too. State hackers from four nations now fold AI into their operations, raising the stakes for networks worldwide. Companies are building better intelligence sharing to stay ahead.

The shift puts pressure on AI makers, who must balance open access with controls. As the tools improve, so do the threats, in an ongoing back-and-forth chase.

Author

  • Vincent K

    Vincent Keller is a senior investigative reporter at The News Gallery, specializing in accountability journalism and in-depth reporting. With a focus on facts, context, and clarity, his work aims to cut through noise and deliver stories that matter. Keller is known for his measured approach and commitment to responsible, evidence-based reporting.
