US Used AI in Military Operation Against Venezuelan President

The U.S. Army employed Anthropic’s Claude in a mission aimed at capturing Venezuelan President Nicolás Maduro, according to sources cited by the Wall Street Journal.

The operation involved strikes on several locations in Caracas.

Using the model for such purposes contradicts Anthropic’s public policy. The company’s guidelines explicitly prohibit the application of AI for violence, weapon development, or surveillance activities.

“We cannot comment on whether Claude or any other model was used in a specific operation—secret or otherwise. Any deployment of a large language model (LLM)—both in the private and public sectors—must comply with our policies regulating the use of neural networks. We work closely with our partners to ensure adherence to these rules,” stated an Anthropic representative.

Claude’s integration into the Department of Defense was made possible through Anthropic’s partnership with Palantir Technologies. The latter’s software is widely utilized by military agencies and federal law enforcement.

Following the raid, an Anthropic employee inquired with a Palantir colleague about the specific role the AI played in the Maduro capture operation, as reported by the WSJ. A representative from the startup noted that the company had not discussed the use of its models in particular missions “with any partners, including Palantir,” confining discussions to technical matters.

“Anthropic is committed to using advanced AI in support of U.S. national security,” the firm’s representative added.

Pentagon spokesperson Sean Parnell said the department is reviewing its relationship with the AI lab.

“Our country needs partners willing to help our forces win any war,” he said.

In July 2025, the U.S. Department of Defense signed contracts worth up to $200 million each with Anthropic, Google, OpenAI, and xAI to develop AI solutions for national security. The Pentagon's Chief Digital and Artificial Intelligence Office intended to use these tools to build agentic AI systems for security tasks.

In January 2026, however, the WSJ reported that the agreement with Anthropic was at risk of termination. The dispute stemmed from the startup's strict ethical policies: its rules prohibit using Claude for mass surveillance and autonomous lethal operations, which limits its use by agencies such as ICE and the FBI.

Officials' frustration grew as the Pentagon integrated the Grok chatbot into its networks. Defense Secretary Pete Hegseth stressed that the department "will not use models that do not allow for combat."

Axios reported that the Pentagon is pressuring the four major AI companies to allow the U.S. military to use their technologies for "all lawful purposes," including weapons development, intelligence collection, and military operations.

Anthropic refuses to lift its restrictions on surveillance of U.S. citizens and on the development of fully autonomous weapons. Negotiations have reached an impasse, but quickly replacing Claude is difficult given its technological edge in certain government tasks.

In addition to Anthropic’s chatbot, the Pentagon also employs OpenAI’s ChatGPT, Google’s Gemini, and xAI’s Grok for non-classified tasks. All three have agreed to ease restrictions imposed on regular users.

Discussions are now underway to move these LLMs into a classified environment for use "for all lawful purposes." One of the three companies has already agreed, while the other two are showing "greater flexibility" than Anthropic.

The United States is not the only country actively incorporating artificial intelligence into its defense sector.

In June 2024, China created an AI commander for large-scale military simulations involving all branches of the People’s Liberation Army (PLA). The virtual strategist possesses broad powers, quickly learns, and improves tactics during digital exercises.

In November 2024, reports indicated that Chinese researchers had adapted Meta's Llama 13B model to build a tool called ChatBIT. The neural network was optimized for collecting and analyzing intelligence data and for supporting operational decision-making.

New Delhi has also placed its bets on AI as a driver of national security. The government has developed strategies and national programs, established special institutes and agencies to implement AI, and launched projects to apply the technology across various sectors.

London has also made artificial intelligence a priority. In its Defence Artificial Intelligence Strategy (2022), the Ministry of Defence treats AI as a key component of the future armed forces, while the Strategic Defence Review (2025) describes the technology as a foundational element of modern warfare.

Whereas AI was previously viewed as a supplementary tool in the military context, the British Armed Forces now plan to become a "technologically integrated force" in which AI systems are used at every level, from staff analysis to the battlefield.

In March 2025, the Pentagon announced plans to use AI agents for modeling conflicts with foreign adversaries.