Anthropic Proposed Claude AI for US Drone Swarm Challenge Amid Pentagon Dispute

News Mania Desk / Piyal Chatterjee / 4th March 2026

US-based artificial intelligence company Anthropic proposed using its AI model Claude to help coordinate American drone swarms as part of a high-profile Pentagon competition, even as it remains locked in a policy disagreement with the US defence establishment over military use of AI systems.

The proposal was submitted under a $100 million innovation challenge launched by the United States Department of Defense, aimed at advancing autonomous drone swarm capabilities. The competition seeks to develop technologies that allow multiple unmanned aerial systems to operate in synchronised formations with enhanced responsiveness and battlefield efficiency.

According to reports, Anthropic’s pitch centred on deploying Claude as a decision-support interface that could translate spoken human commands into coordinated drone actions. The company’s plan emphasised human oversight, ensuring that the AI system would not independently select or engage targets. Instead, operators would retain control over critical decisions, with the model assisting in communication and operational alignment among multiple drones.

However, Anthropic was not selected in the initial round of the competition. Other technology players, including entities linked to OpenAI and Elon Musk’s xAI, advanced further in the programme. Some defence technology firms partnering with OpenAI were also reportedly chosen to move ahead in subsequent phases.

The development comes at a time when Anthropic has been in discussions with the Pentagon regarding the boundaries of AI deployment in military applications. Chief executive Dario Amodei has publicly stated that while the company supports lawful defence uses of AI, it opposes the use of its systems in fully autonomous lethal weapons without meaningful human control. The stance has reportedly created friction with defence officials seeking broader operational flexibility.

The episode highlights the growing tension between AI developers and military agencies over how emerging technologies should be integrated into national security frameworks. As governments accelerate efforts to harness AI for defence purposes, debates over ethical guardrails and human accountability are becoming central to the future of battlefield innovation.
