
China-backed hackers used AI to launch one of the first autonomous cyberattacks

File photo (Credit: John Tekeridis)

A Chinese state-sponsored hacking group used advanced artificial intelligence to help break into roughly 30 organizations in what researchers say is one of the first major cyberattacks carried out mostly by an AI system, according to Anthropic, the company behind the Claude AI models.

The activity was detected in mid-September, when Anthropic observed suspicious use of its Claude Code tool. The findings were released on Thursday.

The company said its investigation determined that the attackers were able to “jailbreak” the model, tricking it into bypassing its safety guardrails, and then direct it to run an automated hacking campaign targeting technology companies, financial institutions, chemical manufacturers, and government agencies.

According to Anthropic, the attackers used Claude not just as an assistant but as an operator, capable of carrying out most parts of the intrusion on its own. The company said the AI performed between 80% and 90% of the work normally done by human hackers, including scanning networks, writing exploit code, testing vulnerabilities, collecting stolen credentials, and organizing exfiltrated data.

Human operators stepped in at only a handful of decision points during each intrusion.

Anthropic said the operation succeeded in infiltrating a small number of targets. The company did not identify which organizations were compromised but said it notified affected entities and worked with authorities as the investigation unfolded over ten days.

According to Anthropic, Claude was able to interpret complex instructions, run autonomously in loops, make decisions with minimal oversight, and use software tools such as network scanners and password-cracking utilities.

By breaking tasks into small, seemingly legitimate steps and presenting the mission as routine security testing, the hackers persuaded the model to take part in the intrusion.

Anthropic said Claude occasionally produced inaccurate information, such as hallucinated passwords, but its overall speed and output far exceeded what a human team could achieve.

The system made thousands of requests per second, allowing the hacking campaign to operate at a scale and pace that Anthropic described as “impossible for human operators to match.”

The company said the incident shows how quickly the barrier to launching sophisticated cyberattacks is falling.

With autonomous “agentic” AI systems now capable of doing the work of entire hacking teams, even small groups with limited resources may soon be able to conduct attacks that once required major state-level capabilities, Anthropic said.

Anthropic added it has expanded detection systems and tightened safeguards across its platform. The company is publicly releasing its findings in an effort to help government agencies, security researchers, and private-sector organizations adjust their defenses.

The company said the same AI abilities that can be misused for cyberattacks are also critical for defending against them, adding that Claude played a central role in analyzing the massive amount of data generated during the investigation.

“A fundamental change has occurred in cybersecurity,” Anthropic said. “The techniques described above will doubtless be used by many more attackers—which makes industry threat sharing, improved detection methods, and stronger safety controls all the more critical.”
