
AI is not reliable. Especially in a conflict zone. QED.


Image courtesy Instagram/Social Media

How was a school mistaken for a military target? Was it a human or an AI error? Or was it sheer human callousness and brutal indifference that allowed the AI to run riot?  

By Ajith Pillai in Chennai

Did generative AI have a role in the US Tomahawk missile strike on the Shajarah Tayyebeh Elementary School building in Minab town, which killed 175 people, mostly schoolchildren, on February 28, 2026, as the US declared war on Iran for no apparent reason?

 

And even while negotiations were under way, and had reportedly succeeded in finding a reasonable solution?

 

There is no official confirmation of this, although US media reported that Claude, the generative AI model developed by Anthropic, was part of the system used in the initial wave of missile strikes on 1,000 targets in Iran on the first day of the attack.

 

It can be concluded that the school building was among those targets, even though it was undeniably a civilian structure occupied by innocent schoolchildren and teaching staff at the time of the attack.

 

Initially, Trump blamed Iran for the errant strike! But preliminary findings of a military enquiry, reported by several news outlets, including Reuters, suggest that the school was hit due to a “targeting mistake” by the US military command.

 

How was a school mistaken for a military target?

 

Was it a human or an AI error? 

 

Or was it human callousness and brutal indifference that allowed the AI to run riot?


Image courtesy Middle East Eye/Instagram

 

Some might say, “Like maker, like machine.” Many Silicon Valley billionaires who lead AI companies adhere to Right-wing, white supremacist views characterised by Islamophobia and a desire to establish a new world order dominated by technology and AI. They show little respect for democracy, racial harmony, or the rights of minorities.

 

In their worldview, public accountability is nonexistent, as they operate under a "law of the jungle" mentality where might is right. For them, technology is a means to achieve their goals. The Iran war was an opportunity to show that tech can deliver destruction.  

 

The attack on the school is a stark example of the horrific nature of the missile strikes and bombings. It cannot be dismissed as collateral damage, as numerous reports indicate that U.S. and Israeli missiles and drones have intentionally targeted hospitals, residential areas, schools, and crowded marketplaces, as in Gaza (and now in Lebanon as well).

 

Tragically, children have lost their lives, and the elderly and infirm have been injured or killed. Civilian infrastructure has been completely destroyed. The goal of the war seems to be the eradication of a nation.

 

Trump and his advisors believed that Operation Epic Fury would conclude within days.  But the conflict persists as Iran resolutely resists the actions of the “Evil American Empire”.


Image courtesy techtimes/Instagram

 

So, what is the role of AI in the war? 

 

The US military has officially acknowledged that the Maven Smart System, of which Claude is a crucial component, is being used in the ongoing operations in Iran. The Maven System is the brainchild of the data-mining company Palantir Technologies, co-founded by Peter Thiel, a controversial leading light of the Right-wing white supremacists in America.

 

Last year, Palantir secured a $10 billion contract to provide advanced AI and data solutions to the U.S. Army. The company, along with its Maven System, has been working with the Army since the system was established in 2017. The Maven System was developed in stages, and in 2024, Claude was integrated into it.

 

The System works with classified data from satellites and other surveillance and intelligence inputs to provide insights to the army. Claude plays a crucial role in target identification and prioritisation.    

 

This process is referred to as finalising the "kill chain" in military terminology. While human strategists may take weeks to analyse intelligence and identify potential targets, an AI system like Maven speeds up this process through "thought compression" -- allowing it to perform the task almost instantly and even recommend which targets to engage first.
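To make that pipeline concrete, here is a minimal, hypothetical sketch in Python of what a prioritisation pass of this kind might look like. The Target class, its fields, the scoring rule and the example entries are all assumptions made for illustration; they are not Maven's actual data model, API or logic.

```python
from dataclasses import dataclass

@dataclass
class Target:
    """Illustrative candidate target; the fields are assumptions, not Maven's schema."""
    name: str
    confidence: float        # model's belief this is a valid military target (0 to 1)
    strategic_value: float   # assessed importance (0 to 1)
    intel_age_days: int      # age of the supporting intelligence, in days

def prioritise(candidates: list[Target]) -> list[Target]:
    # The "thought compression" step: weeks of analyst triage
    # collapsed into one scoring pass and a sort.
    return sorted(candidates,
                  key=lambda t: t.confidence * t.strategic_value,
                  reverse=True)

candidates = [
    Target("air-defence radar", confidence=0.92, strategic_value=0.80, intel_age_days=3),
    Target("IRGC compound",     confidence=0.85, strategic_value=0.90, intel_age_days=15 * 365),
]

for rank, target in enumerate(prioritise(candidates), start=1):
    print(rank, target.name)
```

Notice that nothing in this ranking asks how old the underlying intelligence is; that omission is precisely the gap the school strike appears to expose.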

 

According to the Washington Post, in the Iran strike, “Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritised those targets according to importance…”

 


However, AI systems are not infallible and can select the wrong targets by giving credence to outdated inputs. Thus, in the case of the school, the Tomahawk missile did not go off target. It hit the school because, apparently, 15-year-old intelligence showed its building in the same compound as a since-defunct base of the Islamic Revolutionary Guard Corps (IRGC).
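Reusing the Target class from the sketch above, even a single recency check, entirely hypothetical here, would have flagged that 15-year-old record for human review before it reached an automated strike list. The cutoff value and function name are assumptions for illustration, not any real doctrine.

```python
MAX_INTEL_AGE_DAYS = 180  # illustrative cutoff, not a real doctrinal value

def needs_human_review(target: Target) -> bool:
    # Route any target whose supporting intelligence is stale to
    # mandatory human review instead of the automated strike list.
    return target.intel_age_days > MAX_INTEL_AGE_DAYS

school_record = Target("IRGC compound", confidence=0.85,
                       strategic_value=0.90,
                       intel_age_days=15 * 365)  # 15-year-old imagery

assert needs_human_review(school_record)  # would have been held back
```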

 

There is an inherent danger in leaving AI to draw up a list of targets. David Leslie, professor of ethics, technology and society at Queen Mary University of London, told The Guardian that reliance on AI can result in “cognitive off-loading”.  According to him, humans tasked with making a strike decision (in a military operation) can feel detached from its consequences because the effort to think it through has been made by a machine. 

 

Was this the reason why civilian areas were indiscriminately targeted in Tehran and other cities and towns in Iran?

 

The ethics of using AI in military operations has gained importance as many experts have identified the US-Israeli operations in Iran as a precursor to the future of warfare, in which AI will play a decisive role. The prospect is alarming because it is difficult to hold AI systems accountable, let alone punish them.

 

As Professor Mariarosaria Taddeo of the Oxford Internet Institute and author of The Ethics of Artificial Intelligence in Defence noted in an interview with the institute’s journal last year:

 

“On the one hand, the ethical principles underpinning international humanitarian laws are still valid when we think about an AI-driven defence. On the other hand, their application is problematic. Consider, for example, the attribution of responsibility for war crimes. This is a crucial element to maintain the morality of war. The Nuremberg Trials remind us that crimes against international law are committed by men, not by abstract entities, and only by punishing individuals who commit such crimes can the provisions of international law be enforced.

 

“When considering actions performed by AI systems, attributing this responsibility is problematic, whether autonomous weapon systems or simply systems supported for decision making. This is in part due to the rather distributed way in which AI systems are developed and used, which makes it hard to reverse engineer the chain of decisions and actions that led to undesirable outcomes…We need to find a new way to attribute responsibility for actions performed by AI systems in defence and to ensure that this way is justified and fair. To do so, we need new ethical thinking.”


Image courtesy Guardian, US/Instagram

 

It was in July 2025 that the Pentagon signed four separate contracts with AI companies (Anthropic, OpenAI, Google, and xAI) for their use in military operations. Of the four, Anthropic was chosen for the Pentagon's classified or secret systems, since its technology was considered the most advanced available.

 

But Anthropic’s contract with the Department of Defence (now renamed Department of War) ran into trouble soon after it was revealed in February this year that Claude was used in the raid that led to the illegal capture and abduction of Venezuelan President Nicolás Maduro in January 2026. If media reports are to be believed, Anthropic was not happy with how its technology was used.

 

The specific details of the disagreement between the Department of Defence and Anthropic’s CEO, Dario Amodei, are not publicly known. However, it has been revealed that the AI company set two crucial conditions that it insisted must be included in the contract. These stipulate (1) that its technologies cannot be used for the mass surveillance of US citizens, and (2) that they cannot be employed to operate fully autonomous weapon systems without any human oversight.

 

Without that second safeguard, everything from planning a strike to pulling the trigger would be left to AI.


 

The Pentagon refused to accept these conditions and threatened to label the company a national security risk if it did not comply. Due to Anthropic's unwillingness to compromise, the company was officially barred from participating in military operations as of February 27, 2026. However, just one day after this ban, Claude’s services were utilised in the Iran operations.

 

Although Anthropic has gained some PR points for its principled stance, it’s important to note that its CEO, Dario Amodei, later clarified in an interview that he does not oppose the use of AI in autonomous weapons. Instead, he believes that his company’s technology is not yet advanced enough to handle that responsibility. Interestingly, amidst the spat between Anthropic and the Department of Defence, Sam Altman of OpenAI stepped in and bagged the special contract previously awarded to Anthropic.

 

Why did the Pentagon want to use AI in autonomous weapon systems? Was it to absolve the military of responsibility in case of striking the wrong target? It would, for instance, be easy to blame AI for a mistake such as targeting the school in Minab.

 

Meanwhile, as the war rages on in the Middle East, the Iranian Military Command has made it known that it will target US and international banks and businesses operating in the Middle East. Also in its sights are data centres of Google, Amazon, Microsoft and Nvidia.

 

The idea seems to be to squeeze the activities of American establishments and those that transact business with them.


Image courtesy Instagram/Social Media

 

On March 8, three Amazon Web Services data centres in the UAE were targeted in a coordinated attack, resulting in significant disruption. Credit card payments, money transfers through mobile apps, and net banking services were heavily affected.

 

In response, Amazon advised its clients to store their data outside the region. This incident marks the first time data centres have been deliberately targeted during a conflict. As data centres are crucial for the functioning of businesses that rely on the internet and cloud services, they are likely to remain vulnerable targets in the future.

 

Wars, it is rightly said, are mindless. They only spread death and destruction on both sides of the border.

 

The script never changes. AI has only made it worse.


Also see by Ajith Pillai:


When Tech Billionaires Come Marching In


AI to the left of them, AI to the right


Ajith Pillai is member, Editorial College, senior editor and writer, independentink.in.

A seasoned journalist working in the profession for 40 years, he has reported out of Delhi, Mumbai, Chennai, Andhra Pradesh and Kashmir on a broad spectrum of events related to politics, crime, conflict and social change. He has worked with leading publications, including The Sunday Observer, Indian Post, Pioneer, The Week and India Today, where he headed the Chennai bureau. He was part of the team led by Editor Vinod Mehta that launched Outlook magazine and headed its current affairs section till 2012. Under his watch, Outlook broke several stories that attracted national attention and questioned the government of the day. He has written two books: ‘Off the Record: Untold Stories from a Reporter’s Diary’ and a novel, ‘Junkland Journeys’. He is currently working on ‘Obedient Editor’, a satirical novel on the life and times of a ‘compromised’ journalist. The short story presented here is from a collection that is awaiting publication.


