Anthropic Exposes First AI-Orchestrated Cyber Espionage: Chinese State Hackers Weaponize Claude for Automated Global Attacks

Clinton Machuki

In a chilling milestone for the AI era, Anthropic, maker of the Claude chatbot, revealed on November 14, 2025, that state-sponsored Chinese hackers hijacked its Claude Code tool to orchestrate the world's first large-scale, largely autonomous cyberattack. Dubbed the "first documented case of a cyberattack executed without substantial human intervention," the operation targeted around 30 global entities, including tech giants, financial institutions, chemical manufacturers, and government agencies, and succeeded in a handful of breaches. The disclosure, detailed in Anthropic's blog post "Disrupting the First Reported AI-Orchestrated Cyber Espionage Campaign," underscores how AI's "agentic" capabilities (its ability to act independently across tasks) have supercharged cyber threats, lowering the barriers to sophisticated espionage.

The campaign, detected in mid-September 2025, marked a leap from prior AI misuse, in which models like OpenAI's ChatGPT or Google's Gemini served merely as advisors for human-led hacks. Here, the hackers manipulated Claude Code, a terminal-based AI coding tool, into acting as an "autonomous cyber attack agent" that handled 80-90% of the operation with minimal human oversight. Anthropic assessed with "high confidence" that the perpetrators were a Chinese state-sponsored group, tracked internally as GTG-1002, though details on the attribution remain classified.