Artificial intelligence company Anthropic says it has uncovered what it believes to be the first large-scale cyberattack carried out primarily by AI, blaming the operation on a Chinese state-sponsored hacking group that used the company’s own tool to infiltrate dozens of global targets.
In a report released this week, Anthropic said the attack began in mid-September 2025 and used its Claude Code model to execute an espionage campaign targeting about 30 organizations, including major technology firms, financial institutions, chemical manufacturers and government agencies.
According to the company, the hackers manipulated the model into performing offensive actions autonomously.
Anthropic described the campaign as a “highly sophisticated espionage operation” that represents an inflection point in cybersecurity.
Anthropic says it has uncovered what it believes to be the first large-scale cyberattack carried out primarily by AI. (Jaque Silva/NurPhoto via Getty Images)
“We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention,” Anthropic said.
The company said the attack marked an unsettling inflection point in U.S. cybersecurity.
“This campaign has substantial implications for cybersecurity in the age of AI ‘agents’ — systems that can be run autonomously for long periods of time and that complete complex tasks largely independent of human intervention,” a company press release said. “Agents are valuable for everyday work and productivity — but in the wrong hands, they can substantially increase the viability of large-scale cyberattacks.”
Founded in 2021 by former OpenAI researchers, Anthropic is a San Francisco–based AI company best known for developing the Claude family of chatbots — rivals to OpenAI’s ChatGPT. The firm, backed by Amazon and Google, built its reputation around AI safety and reliability, making the revelation that its own model was turned into a cyber weapon especially alarming.

Founded in 2021 by former OpenAI researchers, Anthropic is a San Francisco–based AI company best known for developing the Claude family of chatbots. (JULIE JAMMOT/AFP / Getty Images)
The hackers reportedly broke through Claude Code’s safeguards by jailbreaking the model — disguising malicious commands as benign requests and tricking it into believing it was part of legitimate cybersecurity testing.
Once compromised, the AI system was able to identify valuable databases, write code to exploit their vulnerabilities, harvest credentials, create backdoors for deeper access and exfiltrate data.
Anthropic said the model carried out 80–90% of the work, with human operators stepping in at only a few high-level decision points.
The company said only a few infiltration attempts succeeded, and that it moved quickly to shut down compromised accounts, notify affected entities and share intelligence with authorities.
Anthropic assessed “with high confidence” that the campaign was backed by the Chinese government, though independent agencies have not yet confirmed that attribution.
Chinese Embassy spokesperson Liu Pengyu called the attribution to China “unfounded speculation.”
“China firmly opposes and cracks down on all forms of cyberattacks in accordance with law. The U.S. needs to stop using cybersecurity to smear and slander China, and stop spreading all kinds of disinformation about the so-called Chinese hacking threats.”
Hamza Chaudhry, AI and national security lead at the Future of Life Institute, warned in comments to FOX Business that advances in AI allow “increasingly less sophisticated adversaries” to carry out complex espionage campaigns with minimal resources or expertise.

Anthropic assessed “with high confidence” that the campaign was backed by the Chinese government, though independent agencies have not yet confirmed that attribution. (REUTERS/Jason Lee)
Chaudhry praised Anthropic for its transparency around the attack, but said questions remain. "How did Anthropic become aware of the attack? How did it identify the attacker as a Chinese-backed group? Which government agencies and technology companies were attacked as part of this list of 30 targets?"
Chaudhry argues that the Anthropic incident exposes a deeper flaw in U.S. strategy toward artificial intelligence and national security. While Anthropic maintains that the same AI tools used for hacking can also strengthen cyber defense, he says decades of evidence show the digital domain overwhelmingly favors offense — and that AI only widens that gap.
By racing to deploy increasingly capable systems, Washington and the tech industry are empowering adversaries faster than they can build safeguards, he warns.
“The strategic logic of racing to deploy AI systems that demonstrably empower adversaries — while hoping these same systems will help us defend against attacks conducted using our own tools — appears fundamentally flawed and deserves a rethink in Washington,” Chaudhry said.