Cybersecurity & Tech

AI-Powered Espionage Will Favor China

Tom Uren
Friday, November 21, 2025, 8:00 AM
The latest edition of the Seriously Risky Business cybersecurity newsletter, now on Lawfare.
The Former National Congress of the Communist Party of China (Source: Wikimedia/Dong Fang)

Published by The Lawfare Institute in Cooperation With Brookings

AI-Powered Espionage Will Favor China

Last week, Anthropic revealed a real-world, artificial intelligence (AI)-orchestrated cyber espionage campaign. There's a real speed and scale benefit here for malicious actors that care more about hacking everything than flying under the radar. Western governments, however, will likely stick to the tried and tested method of "slowly, slowly, catchy monkey."

In the report, Anthropic detailed its discovery of the campaign that used AI "not just as an advisor, but to execute the cyberattacks themselves."

Anthropic believes the threat actor was a Chinese state-sponsored group whose goals align with those of the Chinese Ministry of State Security. The group attempted to infiltrate "roughly thirty" targets of the usual sort: large tech companies, financial institutions, chemical manufacturing companies, and government agencies. It succeeded in a small number of cases.

The attackers built what Anthropic calls an "autonomous attack framework." It used Claude Code to support reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration operations. This framework broke the attack life cycle into discrete tasks that were carried out by Claude subagents. Claude Code orchestrated the entire system, aggregated results from these discrete subtasks and then kicked off more jobs based on its analysis of the information it discovered.

This was all done "largely autonomously." Once an operator set the wheels in motion, the system would execute "about 80-90% of tactical operations independently." This occurred far faster than would be possible for human hands, and the system was able to hack several targets in parallel.
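
To make that architecture concrete, here is a minimal, deliberately generic Rust sketch of the orchestrator-and-subagents pattern Anthropic describes. It is not the threat actor's code: it makes no model or network calls, and the `Task`, `Outcome`, and `run_subagent` names and the placeholder phases are ours, purely to show how a controller can break work into discrete subtasks, aggregate the results, and queue follow-up jobs.

```rust
use std::collections::VecDeque;

// A discrete unit of work the orchestrator can hand to a subagent.
// The phase names below are placeholders, not the campaign's real tasks.
struct Task {
    phase: &'static str,
    input: String,
}

// What a subagent hands back: a summary plus any follow-up work it proposes.
struct Outcome {
    summary: String,
    follow_ups: Vec<Task>,
}

// Stand-in for delegating a subtask to a subagent. In the campaign
// Anthropic describes, each discrete subtask went to a Claude subagent;
// here we simply simulate a result so the loop is runnable.
fn run_subagent(task: &Task) -> Outcome {
    let summary = format!("completed '{}' on {}", task.phase, task.input);
    let follow_ups = if task.phase == "survey" {
        vec![Task { phase: "analyze", input: task.input.clone() }]
    } else {
        Vec::new()
    };
    Outcome { summary, follow_ups }
}

fn main() {
    // The orchestrator keeps a queue of discrete tasks, aggregates the
    // results, and schedules new work based on what comes back.
    let mut queue = VecDeque::from(vec![
        Task { phase: "survey", input: "target-a".to_string() },
        Task { phase: "survey", input: "target-b".to_string() },
    ]);
    let mut report = Vec::new();

    while let Some(task) = queue.pop_front() {
        let outcome = run_subagent(&task);
        report.push(outcome.summary);
        queue.extend(outcome.follow_ups);
    }

    // In Anthropic's account, a human operator reviewed aggregated
    // findings like these before approving any escalation.
    for line in &report {
        println!("{line}");
    }
}
```

The point is the shape of the loop: once such a harness exists, the controller rather than a human decides what to do next, which is where the speed and scale come from.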

Human operators had specific responsibilities, per Anthropic's full report:

Human responsibilities centered on campaign initialization and authorization decisions at critical escalation points. Human intervention occurred at strategic junctures including approving progression from reconnaissance to active exploitation, authorizing use of harvested credentials for lateral movement, and making final decisions about data exfiltration scope and retention.

In other words, Claude carried out the "hands on keyboard" operations while humans were left with what we'd call management-level decisions: accepting risk or triaging and prioritizing intelligence collection.

There's been a surprising amount of skepticism from security researchers about Anthropic's report. One element of this criticism focused on the threat actor's use of open source tooling. The argument is that off-the-shelf tools should be relatively straightforward for defenders to detect, compared to custom malware.

But the innovation here is in the framework rather than in whether any particular hacking tool or AI model was used. To us, this campaign doesn't look like a regular intelligence operation so much as a research project on how to take advantage of AI in cyber espionage campaigns. The use of open source tools is exactly what we'd expect.

The head of Anthropic's threat intelligence team, Jacob Klein, told CyberScoop the threat actor invested a significant amount of time and effort to build the attack framework.

It was the "hardest part of this entire system," Klein said. "That's what was human intensive."

Once the framework was created, however, it allowed single individuals to carry out a lot more hacking. Klein suggested that one person using the attack framework could achieve what he thinks "would have taken a team of about ten folks."

Of course, it is not all roses. Anthropic writes:

Claude frequently overstated findings and occasionally fabricated data during autonomous operations, claiming to have obtained credentials that didn't work or identifying critical discoveries that proved to be publicly available information.

Lol. Bad Claude.

For threat actors with a high risk appetite, or those not focused on specific targets, having an AI hacking buddy that stuffs up occasionally is a win if it allows them to do a whole lot more hacking. Think ransomware actors and even state-backed groups hacking for intellectual property. Even if they, and their AI companions, screw up on one promising victim, there are plenty of other fish in the sea.

For threat actors focused on difficult, security-conscious targets, that trade-off is far more risky. There's only one Vladimir Putin, for example. One meticulous operation is a safer bet than scaled-up attacks with a multitude of mostly automated Claude-brained schemes.

From a state perspective, at least in the short term, we imagine AI-powered hacking campaigns will be adopted by countries that run broad-based hacking campaigns with a high risk appetite: Here's looking at you, North Korea and China!

For Western intelligence agencies that are more tightly focused on specific targets, it's a different story. AI will be useful as an aide and increasingly so, but the speed and scale benefits of automated attacks will be offset by the potential for errors.

Google's Legal Disruption Campaign Kicks Off

Last week, Google announced it had filed litigation to disrupt the Lighthouse phishing-as-a-service kit. Per the company's announcement:

Our legal action is designed to dismantle the core infrastructure of this operation. We are bringing claims under the Racketeer Influenced and Corrupt Organizations Act, the Lanham Act, and the Computer Fraud and Abuse Act to shut it down, protecting users and other brands.

The company's lawsuit takes aim at 25 unnamed individuals believed to be responsible for the service and to be living in China. Lighthouse is used to send text messages that phish victims for payment card data, as described by Krebs On Security. The suit claims Lighthouse has hit more than 1 million victims across 120 countries.

Surprisingly, given that the operators are suspected to be in China, the lawsuit appeared to have an immediate impact. The next day, Google's general counsel, Halimah DeLaine Prado, said that Lighthouse's operations had been shut down, which she called "a win for everyone."

Other security researchers confirmed the disruption, too. Lighthouse domains reportedly disappeared, and the group's Telegram channels were taken down for terms-of-service violations.

That's all good news. But Google's goals are more than just short-term disruption. DeLaine Prado told Wired that one aim of the lawsuit was to enable further action by other organizations:

"Filing a case in the U.S. actually allows us to have a deterrent impact outside of the U.S. borders," DeLaine Prado says. Rulings in the company’s favor would also allow it to "go to other platforms that are hosting vectors or aspects" of the Lighthouse network and ask them to take them down, she says. "It enables others to do the same as well. That court order can be used for good to help dismantle the actual infrastructure of the operation," she adds.

Back in September, Sandra Joyce, vice president of Google's Threat Intelligence group, announced the company was starting a cyber "disruption unit" that would look for legal and ethical opportunities to disrupt threat actors.

A single lawsuit won't stop the people behind Lighthouse for good. But we are optimistic that it is a pretty good start to an enduring campaign.

Memory Safe Languages Are Safer And Faster

Adopting the memory safe Rust language in Android is not only safer than using C or C++, it is also more efficient from a software delivery perspective, according to Google.

Memory safety bugs are a class of vulnerabilities related to how computers read, write, and store memory. They are particularly problematic, Google says, because they "tend to be significantly more severe, more likely to be remotely reachable, more versatile, and more likely to be maliciously exploited than other vulnerability types."
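
As a concrete illustration of what that means in practice, consider a hypothetical Rust snippet (ours, not code from Android or Google). The out-of-bounds read that is undefined behavior in C or C++, and therefore a potential foothold for an attacker, is either caught by the compiler or fails safely at runtime in Rust:

```rust
fn main() {
    // A small buffer, standing in for any heap allocation.
    let readings = vec![10u32, 20, 30];
    let index: usize = 7; // deliberately past the end of the buffer

    // In C or C++, reading element 7 of a 3-element array is undefined
    // behavior: the program may silently return whatever memory sits
    // past the allocation, which is what makes these bugs exploitable.
    // Safe Rust checks the access instead.
    match readings.get(index) {
        Some(value) => println!("reading[{index}] = {value}"),
        None => println!("index {index} is out of bounds; nothing was read"),
    }

    // Even the unchecked-looking form, `readings[index]`, would not
    // corrupt memory: it panics with a clear error rather than
    // continuing with garbage data.
}
```

The C or C++ equivalent would quietly hand back whatever bytes happen to sit past the end of the buffer, which is exactly the class of bug Google's figures describe.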

In previous editions of this newsletter, we've covered how, even in large established projects, writing new code in memory safe languages such as Rust or Go is a quick win. We also praised the Biden administration's efforts to encourage the adoption of memory safe languages.

Last week, Google reported that, from a security perspective, its adoption of Rust was going swimmingly. Based on data from the past few years, it reckons that, per line of code, Rust is roughly a thousand times less likely to contain a memory safety vulnerability than C/C++. This year, for the first time, more new code was written in Rust than in C/C++. As a result, Android memory safety vulnerabilities have declined from nearly 80 percent of total vulnerabilities in 2019 to less than 20 percent this year.

Not only that, Google has crunched the numbers and found that Rust code is easier and faster to deploy because it is quicker to review and more likely to be correct. When changes are pushed out, the rollback rate for Rust is about four times lower than for C++.

What's not to like?

Three Reasons to Be Cheerful This Week:

  1. U.S. creates a scam center "strike force": The Department of Justice announced the establishment of a strike force to target the transnational criminal organizations in Southeast Asia that run cryptocurrency-related fraud schemes. The effort involves the Justice Department, the FBI, and the Secret Service and is "seeking to use all government tools available." It has already rolled out financial sanctions and obtained seizure warrants for Starlink terminals used by scammers.
  2. Kicking goals against the North Korean worker scheme: The U.S. government announced that five people have pleaded guilty to facilitating the fraudulent North Korean IT worker scheme. They helped the scheme by providing false identities or by hosting corporate laptops at residences within the U.S., for example. In the same announcement, the government said that it had seized more than $15 million worth of the USDT stablecoin that had been stolen by North Korea's APT38.
  3. Dutch police seize 250 bulletproof hosting servers: The servers were located in the cities of The Hague and Zoetermeer and were being used by what the Dutch police described as a "rogue hosting company." The physical machines hosted "thousands" of virtual servers.

Shorts

An Interview With a Cyber Kingpin

BBC cyber correspondent Joe Tidy has an interesting interview with the former leader of the Jabber Zeus cybercrime group, Vyacheslav Penchukov, aka "Tank." It's an entertaining, colorful read.

Penchukov's criminal career spans two eras: from the early days of quick and easy bank account theft to the rise of ransomware. Ironically, he once tried to go straight but returned to cybercrime because Ukrainian authorities kept shaking him down for bribes. Or so he says.

Operation Endgame Is Misnamed

The Europol-coordinated Operation Endgame announced successful operations against the Rhadamanthys infostealer, the VenomRAT remote access trojan, and the Elysium botnet. [Risky Bulletin has further coverage.]

Endgame is tackling ransomware holistically by addressing the ecosystem that supports the crime. It's great.

Last week, however, two of the operation's earlier targets, DanaBot and Lumma Stealer, returned after a hiatus. We love the goals and approach of Endgame (and its cheesy-yet-cool videos and graphic design). But the reality is cybercrime is so profitable that current law enforcement operations can only really suppress the problem, not end it.

Risky Biz Talks

In our latest "Between Two Nerds" discussion, Tom Uren and The Grugq talk about the strategic "logic" of Russian wiper attacks on the Ukrainian grain sector.

From Risky Bulletin:

Europol takes down Elysium, VenomRAT, and Rhadamanthys infrastructure: Europol and law enforcement agencies from more than 30 countries have seized servers, domains, and Telegram channels for three malware services—the Rhadamanthys infostealer, the VenomRAT, and the Elysium botnet.

Authorities say the three malware strains infected hundreds of thousands of users and stole millions of credentials. The stolen credentials were later used to deploy ransomware or steal cryptocurrency.

The takedown was part of Operation Endgame, a Europol-led project that began in 2023 and targets criminal infrastructure used to enable ransomware attacks.

In total, authorities seized 1,025 servers and 20 domains, and searched 11 locations. The administrator of the VenomRAT was also arrested following a raid in Greece earlier this month.

[more on Risky Bulletin]

China accuses U.S. of stealing stolen crypto: The Chinese government says the U.S. wrongfully seized crypto funds that actually belonged to a Chinese crypto-mining company. The U.S. Justice Department seized $15 billion worth of Bitcoin from the operator of scam compounds last month, claiming the funds were owned by the Prince Group and its CEO, Chen Zhi. In a report last week, China's CERT said the funds could be traced back to the 2020 hack of the Chinese crypto-mining company LuBian. The report echoes a similar blog post from blockchain analysis firm Elliptic.

 [Editorial note: The machine translation of the CERT China report seems to suggest in one sentence that the U.S. hacked LuBian. I believe that's a mistranslation and not in the tone of the rest of the report, which just wants to shame the U.S. for seizing the wrong funds.]


Tom Uren writes Seriously Risky Business, a big-picture, policy-focused cyber security newsletter. He also co-hosts the Seriously Risky Business and Between Two Nerds podcasts that appear on the Risky Business News feed. He was formerly a Senior Analyst in the Australian Strategic Policy Institute's (ASPI) Cyber Policy Centre where he contributed to various projects including on offensive cyber capabilities, information operations, the Huawei debate in Australia and end-to-end encryption.
