The escalating military confrontation between the United States, Israel, and Iran since late February has marked a turning point in the deployment of artificial intelligence in warfare. For the first time at such scale, AI systems have been used to analyze intelligence, select targets, and guide thousands of airstrikes across Iranian territory. Yet this unprecedented technological integration has exposed critical gaps in oversight, raising fundamental questions about accuracy, accountability, and whether machines or humans ultimately bear responsibility for strikes that kill civilians.
Experts and military officials remain divided over how much control humans actually maintain over these automated systems, particularly as the technology evolves to process information faster than human decision-makers can verify it.
How AI Is Being Used in the Iran Campaign
Artificial intelligence is now embedded across multiple stages of modern military operations. The United States military relies on systems like Maven Smart System (MSS), built by defense contractor Palantir, which is designed to identify and prioritize potential targets from vast streams of satellite imagery, radar data, drone video, and electromagnetic signals.
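Palantir has not published Maven’s internals, but the pattern that description implies, fusing detections from multiple sensor streams and ranking candidates for human review, can be sketched in simplified form. Everything in the snippet below (names, coordinates, the corroboration bonus) is invented for illustration and should not be read as Maven’s actual logic:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    source: str                    # e.g. "satellite", "radar", "drone_video", "sigint"
    location: Tuple[float, float]  # (lat, lon)
    confidence: float              # model-reported confidence in [0, 1]

def fuse_and_rank(detections: List[Detection], corroboration_bonus: float = 0.15):
    """Group detections by rough location and rank candidate targets.

    A candidate seen by multiple independent sensor types gets a small
    confidence bonus; the ranked list is surfaced to a human analyst,
    not acted on automatically.
    """
    buckets = {}
    for d in detections:
        # Quantize coordinates so nearby detections share a bucket (~1 km).
        key = (round(d.location[0], 2), round(d.location[1], 2))
        buckets.setdefault(key, []).append(d)

    candidates = []
    for key, group in buckets.items():
        sources = {d.source for d in group}
        score = max(d.confidence for d in group)
        score += corroboration_bonus * (len(sources) - 1)
        candidates.append((key, min(score, 1.0), sorted(sources)))

    return sorted(candidates, key=lambda c: c[1], reverse=True)

feed = [
    Detection("satellite", (35.70, 51.42), 0.81),
    Detection("drone_video", (35.70, 51.42), 0.74),
    Detection("sigint", (35.70, 51.42), 0.62),
    Detection("radar", (34.64, 50.88), 0.58),
]
for loc, score, sources in fuse_and_rank(feed):
    print(f"{loc}  score={score:.2f}  sources={sources}")
```

The design assumption in the ranking step is that the system proposes and a human disposes; how reliably that division of labor holds in practice is precisely what the rest of this article questions.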
According to recent reporting, Anthropic’s Claude AI model has been integrated with Maven to enhance detection and simulation capabilities, allowing military systems to process information at speeds previously impossible. France’s military AI director, Bertrand Rondepierre, explained the advantage: “AI algorithms allow us to move much faster in handling information, and above all to be more comprehensive.”
Israel has also deployed AI targeting systems in this conflict and prior campaigns. The “Lavender” program, used during operations in Gaza, demonstrated AI’s ability to sift through enormous datasets to identify targets, though with acknowledged margins of error.
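Error margins reported in percentage terms can obscure the absolute human stakes when a system flags targets at scale. The back-of-envelope calculation below uses invented round numbers, not figures attributed to Lavender or any other system:

```python
# Back-of-envelope arithmetic, with invented round numbers: even a
# small per-decision error rate produces large absolute numbers of
# misidentifications when a system flags people at scale.
flagged = 30_000      # hypothetical number of individuals flagged
error_rate = 0.10     # hypothetical 10% misidentification rate

misidentified = flagged * error_rate
print(f"{misidentified:,.0f} misidentifications at a {error_rate:.0%} error rate")
# -> 3,000 misidentifications at a 10% error rate
```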
The Iranian School Strike and Accountability Questions
The bombing of an Iranian school, which Iranian authorities claim killed 150 people, has become a focal point in the debate over AI targeting accuracy. Neither the United States nor Israel has acknowledged responsibility for the strike. The facility was located near installations controlled by the Islamic Revolutionary Guard Corps (IRGC), Iran’s ideological military force.
This proximity raises a critical question: did AI systems fail to distinguish between a civilian school and a nearby military facility, or was the error made by human operators who relied on outdated data?
Peter Asaro, chair of the International Committee for Robot Arms Control, pointed out that accountability becomes murky when machines are involved. “If something does go wrong, then who’s responsible?” he asked. “Did they not distinguish it from the military base as they should have, but who is they: human or machine?”
The Data Problem
If AI systems were responsible for the school targeting, Asaro emphasized that the critical question becomes “how old was the data” used for targeting, and whether the error stemmed from a “database error” rather than algorithmic failure.
Israel’s mass surveillance capabilities in Gaza fed real-time intelligence to Lavender, enabling greater accuracy in a confined area. “It seems less likely that such a system has been set up in Iran,” noted Laure de Roucy-Rochegonde of France’s IFRI think tank, suggesting that strikes there may rely on less current information.
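One way to see why data currency matters is a simple freshness check, the kind of guardrail a targeting workflow might apply before acting on old intelligence. The threshold and the observations below are hypothetical, invented purely to illustrate the idea:

```python
from datetime import timedelta

# Hypothetical staleness check: intelligence older than a freshness
# window is flagged for re-verification before any targeting decision.
FRESHNESS_WINDOW = timedelta(hours=6)

observations = [
    ("A-101", timedelta(hours=2)),   # recently observed
    ("B-202", timedelta(days=9)),    # well past the window
]

for target_id, age in observations:
    status = "current" if age <= FRESHNESS_WINDOW else "stale: requires re-verification"
    print(f"{target_id}: last observed {age} ago -> {status}")
```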
Human Control Versus Automation
Military officials insist that humans remain in control of targeting decisions. Rondepierre argued firmly that AI systems “operating without anyone being in control” remain “science fiction,” and that in France at least, “military commanders are at the heart of the action and the design of these systems.”
However, critics question this claim. As military AI systems process information far faster than human analysts can vet it, the practical ability to maintain meaningful human oversight becomes increasingly strained.
The speed advantage of AI creates a paradox: the very capability that makes these systems attractive also compresses the time available for human review and judgment. This acceleration of the “kill chain” (the sequence of steps from detecting a target to executing a strike) may reduce rather than enhance accountability.
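The arithmetic of that compression is straightforward. In the toy numbers below, invented purely for illustration, a human review team must either fall behind the machine or cut each review to a fraction of the time careful verification takes:

```python
# A toy queueing calculation of the speed paradox: if an AI surfaces
# candidate targets faster than analysts can vet them, either a backlog
# grows or per-target review time shrinks. All numbers are invented.
candidates_per_hour = 120   # hypothetical AI detection rate
analysts = 4                # hypothetical size of the review team
minutes_per_review = 10     # time for a careful human check

capacity = analysts * 60 / minutes_per_review          # 24 reviews/hour
backlog_growth = candidates_per_hour - capacity        # 96 unreviewed/hour
forced_minutes = analysts * 60 / candidates_per_hour   # 2.0 minutes/target

print(f"Capacity: {capacity:.0f}/hour vs {candidates_per_hour} surfaced/hour")
print(f"Backlog grows by {backlog_growth:.0f} targets per hour, or")
print(f"review time is forced down to {forced_minutes:.1f} minutes per target")
```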
The Broader Pentagon-Anthropic Row
The deployment of AI in Iran strikes has coincided with a sharp dispute between the Trump administration’s Pentagon and Anthropic, the AI company behind the Claude model.
The Pentagon formally designated Anthropic as a “supply chain risk” to US national security, the first time a US company has received such a classification. The designation stems from Anthropic’s public stance that its technology should not be used for mass surveillance or fully autonomous weapons systems.
Anthropic CEO Dario Amodei vowed to challenge the designation in court, arguing that the Pentagon’s classification misrepresents the practical scope of the restriction and punishes the company for its ethical stance.
Why This Matters
The Pentagon row reflects deeper anxieties about AI’s role in warfare. Military applications of AI now run from “logistics to reconnaissance, observation, information warfare, electronic warfare and cybersecurity,” according to defense analysts. As one expert noted, “almost any military function can be boosted with AI.”
Yet this expansion has occurred with minimal public debate about liability, accuracy standards, or what level of AI autonomy is acceptable in warfare.
Looking Ahead
Benjamin Jensen of the Center for Strategic and International Studies noted that global militaries have only begun to understand AI’s potential. “The world’s armies haven’t fundamentally rethought how we plan, how we conduct operations, to take advantage” of AI’s capabilities, he said. “It’s going to take a generation for us to really figure this out.”
The current crisis in Iran is likely just the opening chapter in AI’s role in military conflict. As systems become more sophisticated and integrated, the questions of human accountability, data accuracy, and the ethics of delegating life-and-death decisions to algorithms will only intensify.
Conclusion
The deployment of artificial intelligence in strikes on Iran has revealed both the technological power and the profound risks of automated military decision-making. While AI can process information faster than any human operator, the current crisis demonstrates that speed without accountability can lead to civilian casualties. The parallel clash between the Pentagon and Anthropic underscores that the ethical boundaries of military AI remain contested and unresolved.