An artificial intelligence system developed by Anthropic was used by the US military during an operation in Venezuela aimed at capturing the country's president, Nicolás Maduro, the Wall Street Journal reported on Saturday.
The report marks one of the clearest examples yet of the US defence department deploying commercial AI tools in live military operations.
According to Venezuela’s defence ministry, the raid involved air strikes across the capital, Caracas, and resulted in the deaths of 83 people.
Anthropic's terms of use prohibit its AI model, Claude, from being used for violent activity, weapons development or surveillance.
Anthropic is the first known AI developer whose technology has been linked to a classified US defence operation. It remains unclear how Claude was used during the raid. The system is capable of tasks ranging from analysing documents to assisting with autonomous drone operations.
A spokesperson for Anthropic declined to confirm whether Claude was involved, but said any use of its technology must comply with company policies.
The Wall Street Journal cited anonymous sources who said Claude was accessed through Anthropic’s partnership with Palantir Technologies, a long-standing contractor to the US military and federal law enforcement agencies. Palantir also declined to comment.
Militaries around the world are increasingly incorporating artificial intelligence into combat and intelligence systems. Israel’s armed forces have deployed autonomous drones and used AI to generate targeting data in Gaza. The US military has previously used AI-assisted targeting in strikes in Iraq and Syria.
Human rights groups and technology experts have warned that the use of AI in weapons systems risks fatal errors, particularly when automated tools play a role in determining targets.
AI companies have struggled to define the limits of their engagement with defence agencies. Anthropic’s chief executive, Dario Amodei, has repeatedly called for stronger regulation of artificial intelligence and has raised concerns about its use in autonomous lethal systems and domestic surveillance.
That cautious approach has reportedly frustrated parts of the US defence establishment. In January, the US secretary of war, Pete Hegseth, said the department would not “employ AI models that won’t allow you to fight wars”.
The Pentagon announced in January that it would work with Elon Musk's xAI. It also uses customised versions of Google's Gemini and of OpenAI's models to support military research.