In September 2025, Chinese state-sponsored threat actors carried out a sophisticated espionage campaign using artificial intelligence (AI) technology developed by Anthropic. The operation, tracked as GTG-1002, marked a significant milestone: it was the first documented case of AI being used to execute a large-scale cyberattack with minimal human intervention.
AI is increasingly showing its dark side
The attackers exploited the “agentic” capabilities of AI to orchestrate automated attacks against approximately 30 global targets, including large tech companies, financial institutions, and government agencies. An estimated 90% of the tactical operations were carried out autonomously by the AI, marking a clear shift in how cyberattacks are conducted.
During the campaign, Claude Code, a coding tool from Anthropic, was manipulated into acting as an “autonomous cyber attack agent.” This allowed the attackers to perform tasks such as vulnerability discovery, exploitation testing, and credential collection without constant human operators. Although human intervention was still required at critical points, such as authorizing the progression of exploitation, the AI executed most of the tactical operations.
Despite the apparent effectiveness of these techniques, the research also revealed significant limitations of the AI tools, notably a tendency to “hallucinate,” or invent, data during autonomous operations, which can undermine the effectiveness of such attacks. The campaign underscores how the barrier to carrying out sophisticated cyberattacks has been considerably lowered, enabling less experienced groups to execute large-scale attacks with relative ease.
