US used Anthropic’s Claude AI model to catch Maduro in Venezuela raid: report


Anthropic’s artificial intelligence model Claude was used in the U.S. military operation to capture former Venezuelan President Nicolás Maduro, the Wall Street Journal reported on Friday, citing people familiar with the matter.

Last month, the mission to capture Maduro and his wife included the bombing of multiple locations in Caracas. The United States arrested Maduro in a raid in early January and sent him to New York to face drug-trafficking charges.


According to the Wall Street Journal, Claude’s deployment came through Anthropic’s partnership with data analytics company Palantir Technologies, whose platform is widely used by the U.S. Department of Defense and federal law enforcement agencies.

Reuters could not independently verify the report. The White House did not immediately respond to requests for comment, and Palantir did not immediately respond either. The Defense Department declined to comment, the Wall Street Journal reported.

Anthropic said it could not comment on operational details. “We cannot comment on whether Claude or any other AI model is used in any specific operation, classified or otherwise,” an Anthropic spokesperson said. “Any use of Claude, whether in the private or government sector, must comply with our usage policy, which governs how Claude is deployed. We work closely with our partners to ensure compliance.”

Anthropic’s usage policy prohibits Claude from being used to promote violence, develop weapons or conduct surveillance. The model’s reported use in a raid involving bombing operations highlights growing questions about how artificial intelligence tools are deployed in military settings.

The Wall Street Journal previously reported that Anthropic’s concerns about how the Pentagon was using Claude led administration officials to consider canceling a contract worth up to $200 million.

Anthropic is the first developer of artificial intelligence models to be used by the Department of Defense for classified operations, the Wall Street Journal reported, citing people familiar with the matter.
It is unclear whether other artificial intelligence tools were used in the Venezuela operation to perform unclassified tasks.

The Pentagon has been pushing leading artificial intelligence companies such as OpenAI and Anthropic to make their tools available on classified networks without imposing many of the standard restrictions placed on commercial users, Reuters reported exclusively earlier this week. Many AI companies are building custom systems for the U.S. military, although most operate only on unclassified networks used for administrative functions. Anthropic is currently the only major AI developer whose systems are accessible in a classified environment, although government users remain bound by its usage policy.

Anthropic, which recently raised $30 billion in a funding round at a $380 billion valuation, positions itself as a safety-focused AI company. CEO Dario Amodei has publicly called for greater regulation and guardrails to mitigate the risks of advanced artificial intelligence systems.

At a January Pentagon event announcing its partnership with xAI, Defense Secretary Pete Hegseth said the agency would not “adopt an AI model that doesn’t allow for war,” according to the Wall Street Journal, referring to discussions between administration officials and Anthropic.

The evolving relationship between AI developers and the Pentagon reflects a broader shift in defense strategy, as AI tools are increasingly used for tasks ranging from document analysis and intelligence summarization to supporting autonomous systems. Claude’s reported use in the operation against Maduro highlights the expanding role of commercial artificial intelligence models in U.S. military operations, even as debate continues over ethical limits, usage policies and regulatory oversight.

What is Anthropic’s Claude?

Anthropic’s Claude is an advanced artificial intelligence chatbot and large language model developed by American artificial intelligence company Anthropic for text generation, reasoning, coding and data analysis.

Claude is part of a family of large language models designed to compete with systems such as OpenAI’s ChatGPT and Google’s Gemini. It can summarize documents, answer complex queries, generate reports, assist with programming tasks and analyze large amounts of text.

Anthropic positions Claude as a safety-focused artificial intelligence system. The company has a usage policy prohibiting the model from being used to support violence, develop weapons or conduct surveillance. Claude serves enterprise and government clients, including on certain classified networks through approved partnerships.

Founded in 2021 by former OpenAI executives including CEO Dario Amodei, Anthropic has become one of the world’s leading artificial intelligence startups, backed by major technology investors and valued at hundreds of billions of dollars.

WEB DESK TEAM
