US used Anthropic’s Claude AI model in Venezuela raid that captured Maduro: Report

Anthropic’s artificial intelligence model Claude was used in the US military operation that captured former Venezuelan President Nicolas Maduro, The Wall Street Journal reported on Friday, citing people familiar with the matter.

The mission to capture Maduro and his wife included bombing several sites in Caracas last month. The United States apprehended Maduro in an early January raid and transported him to New York to face drug trafficking charges.

According to The Wall Street Journal, Claude’s deployment occurred through Anthropic’s partnership with data analytics firm Palantir Technologies, whose platforms are widely used by the US Defense Department and federal law enforcement agencies.

Reuters could not independently verify the report. The US Defense Department and the White House did not immediately respond to requests for comment. Palantir also did not immediately respond.

Anthropic said it could not comment on operational details.

“We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise,” an Anthropic spokesman said. “Any use of Claude—whether in the private sector or across government—is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance.”

The Defense Department declined to comment, according to The Wall Street Journal.

Anthropic’s usage policies prohibit Claude from being used to facilitate violence, develop weapons or conduct surveillance.
The reported use of the model in a raid involving bombing operations highlights growing questions about how artificial intelligence tools are being deployed in military contexts.

The Wall Street Journal previously reported that Anthropic’s concerns over how Claude could be used by the Pentagon have led administration officials to consider canceling a contract worth up to $200 million.

Anthropic was the first AI model developer to be used in classified operations by the Department of Defense, according to people familiar with the matter cited by The Wall Street Journal. It remains unclear whether other AI tools were used in the Venezuela operation for unclassified tasks.

The Pentagon has been pushing leading AI companies, including OpenAI and Anthropic, to make their tools available on classified networks without many of the standard restrictions applied to commercial users, Reuters exclusively reported earlier this week.

Many AI companies are building custom systems for the US military, although most operate only on unclassified networks used for administrative functions. Anthropic is currently the only major AI developer whose system is accessible in classified settings through third parties, though government users remain bound by its usage policies.

Anthropic, which recently raised $30 billion in a funding round valuing it at $380 billion, has positioned itself as a safety-focused AI company.
Chief Executive Dario Amodei has publicly called for stronger regulation and guardrails to mitigate risks from advanced AI systems.

At a January event announcing Pentagon collaboration with xAI, Defense Secretary Pete Hegseth said the agency would not “employ AI models that won’t allow you to fight wars,” referring to discussions administration officials have had with Anthropic, The Wall Street Journal reported.

The evolving relationship between AI developers and the Pentagon reflects a broader shift in defense strategy, as artificial intelligence tools are increasingly used for tasks ranging from document analysis and intelligence summarization to support for autonomous systems.

The reported use of Claude in the Maduro raid underscores the expanding role of commercial AI models in US military operations, even as debates continue over ethical limits, usage policies and regulatory oversight.

What is Anthropic’s Claude?

Claude is an advanced artificial intelligence chatbot and large language model developed by US-based AI company Anthropic, designed for text generation, reasoning, coding and data analysis.

Claude is part of a family of large language models built to compete with systems such as OpenAI’s ChatGPT and Google’s Gemini. It can summarise documents, answer complex queries, generate reports, assist with programming tasks and analyse large volumes of text.

Anthropic positions Claude as a safety-focused AI system. The company has built usage policies that prohibit the model from being used to support violence, develop weapons or conduct surveillance. Claude is available to enterprises and government clients, including on certain classified networks through approved partnerships.

Founded in 2021 by former OpenAI executives including CEO Dario Amodei, Anthropic has emerged as one of the leading AI startups globally, with backing from major technology investors and a valuation running into hundreds of billions of dollars.
