An art installation of a Trojan Horse made of e-waste appears on the Tel Aviv University campus while banners for OpenAI adorn a nearby building in 2023. A group of researchers left OpenAI and founded Anthropic in 2021 over concerns about the company’s resistance to guardrails that would regulate its AGI ambitions. JACK GUEZ/AFP/Getty Images

Bessma Momani is professor of political science at the University of Waterloo, visiting fellow at the NATO Defense College, and senior fellow at the Centre for International Governance Innovation.

The debate over the power and limits of artificial intelligence (AI) is now everywhere, from boardrooms and workplaces to the halls of government. But nowhere should the conversation be more urgent than in the execution of war.

When AI begins making or accelerating decisions about who lives and who dies, the ethical stakes become existential. With widespread doubts about the legal, political and strategic motivations behind the American and Israeli attack on Iran, the automation of battlefield decisions demands global scrutiny. The fog of war in the age of AI is thickening. The world needs to confront the dangerous erosion of guardrails around emerging technologies before it is too late.

For years, advanced militaries have leaned on AI to enhance intelligence, surveillance and reconnaissance. By sifting through terabytes of satellite feeds, drone footage and intercepted communications, AI systems search for patterns that human analysts might miss, supporting command‑and‑control decisions and building a more complete picture of the battlefield.

AI enthusiasts insist this increases accuracy and reduces unnecessary casualties. In theory, it should prevent the tragic misidentification of a school as a military bunker or a child carrying a toy as a fighter carrying a weapon. It is the Silicon Valley‑infused promise that more data and faster computation will translate into fewer mistakes and hasten the end of a war.

This techno-optimistic narrative collapses on contact with reality. Israel’s Lavender program – an AI‑powered database of Palestinians in Gaza – was used, in effect, to generate a kill list that, according to journalistic investigations, operated with minimal human oversight and contributed to high rates of indiscriminate killing. And decades of research show that the more distance put between the combatant and the target, whether through drones or algorithmic decision‑making, the easier it becomes to view human beings as objects. Dehumanization rises. So do death tolls.

The problem isn’t just misuse; it’s math. Error rates are inherently built into current AI models, especially under unclear, fast-moving battlefield conditions. AI hallucinations are not rare edge cases. They are statistical certainties. Perfection is mathematically impossible with today’s AI technology, and some experts believe large language models will never achieve it because their training data are produced by humans, who make mistakes. AI does not understand context, intent, or the moral weight of uncertainty. It sees patterns, not human beings.
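A back-of-the-envelope sketch shows why small per-decision error rates become near-certain errors at scale. The 0.1-per-cent error rate and 30,000 decisions below are assumptions chosen purely for illustration, not figures reported anywhere in this piece:

```python
# Illustrative sketch only: assumed error rate and decision count, not reported figures.
per_decision_error_rate = 0.001   # assumed: the model is wrong 0.1% of the time
decisions = 30_000                # assumed: identifications made over a campaign

# Expected number of misidentifications across the campaign.
expected_errors = per_decision_error_rate * decisions

# Probability that at least one misidentification occurs.
prob_at_least_one_error = 1 - (1 - per_decision_error_rate) ** decisions

print(f"Expected misidentifications: {expected_errors:.0f}")                # ~30
print(f"Probability of at least one error: {prob_at_least_one_error:.4f}")  # ~1.0000
```

Even a system that is right 99.9 per cent of the time becomes virtually guaranteed to misidentify someone once it is making tens of thousands of calls.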

Unlike the laughable extra fingers familiar from AI-generated images, these hallucinations have catastrophic consequences. That forces us to confront a brutal question: who is responsible for the deaths of at least 168 people, the majority of them Iranian schoolgirls, killed in Minab on the first day of the U.S.-Israeli strikes? One would presume there is an obvious answer, but current international humanitarian law can be ambiguous.

A body is prepared for burial during a funeral for victims of the presumed U.S.-Israeli strike on a school in Minab, Iran, on Feb. 28. Amirhossein Khorgooei/Reuters

International humanitarian law is undergirded by the principle that individuals are “criminally responsible for war crimes they commit.” If a fully autonomous system made that target determination in Minab, or if an analytic model misread heat signatures or movement patterns, who then bears legal and moral responsibility for that fatal strike? Theoretically, who stands trial for gross human rights violations or war crimes? If the target was identified and missiles fired without any human involvement, who are the individuals to be held to account? The software engineers who developed the code?

This is the growing reality of algorithmic warfare, and it will get muddier still with agentic warfare if decision loops become fully automated, without human intervention. As AI research and development grows exponentially, there are fears that AI agents will become smarter than the smartest humans and then act independently, use deception to get their way, or simply ignore human input in order to ensure their own survival.

Many AI researchers have long warned about AI overtaking human intelligence – achieving what is called Artificial General Intelligence (AGI) – and about the need for guardrails to protect humanity. Several leading researchers left OpenAI over these broader concerns about the direction of AI and founded Anthropic in 2021, with a mission to build safer, more constrained AI models.

Anthropic CEO Dario Amodei has repeatedly warned that AI could supercharge the worst tendencies of autocratic states by enabling mass surveillance, predictive policing, information warfare, and even automated political assassinations. These warnings were not hypothetical. They were grounded in early demonstrations of what AI can already do.

Last July, Anthropic signed a US$200-million contract with the U.S. Department of Defense to use Claude Gov, the first large language model trained on classified material. The agreement included two restrictions: no use of the tool in fully autonomous weapons and an explicit prohibition on domestic mass surveillance of Americans. It was a recognition that the technology’s power needed guardrails.

Yet on Jan. 9, 2026, U.S. Secretary of Defense Pete Hegseth issued a sweeping AI strategy memo pushing the Pentagon to become an “AI‑first” warfighting force. It explicitly called for accelerating AI adoption “from campaign planning to kill chain execution” and urged experimentation unconstrained by ethics guidelines. The Trump administration called for AI systems free from “ideological bias” and “red tape” and from the pursuit of “social engineering” and “cultural agendas.” This was a coded rejection of the ethically informed guardrails designed to prevent catastrophic misuse of AI.

What followed was an unprecedented clash and an act of government overreach. Pentagon officials demanded full, unrestricted access to Claude Gov’s capabilities. Anthropic refused, warning that “some uses are simply outside the bounds of what today’s technology can safely and reliably do.” President Trump publicly escalated the conflict, labelling Anthropic a “radical left, woke company” and accusing it of trying to “strong‑arm” the U.S. military.

The administration then threatened to designate Anthropic a national security supply‑chain risk, a label typically applied to Chinese or Russian firms, or to invoke the Defense Production Act, a Cold War‑era law, to compel the company’s compliance. As with so many Trump administration policies, the U.S. government was willing to coerce its own domestic AI supplier into providing access to capabilities the company deemed unsafe.

Despite Anthropic’s objections, media reports now suggest Claude Gov was used both in the U.S. operation in Venezuela to depose Nicolás Maduro and in the strike that killed Iran’s Supreme Leader, Ayatollah Ali Khamenei. We do not know whether the latter operation relied on a fully autonomous weapons system, but the technology is already there. The Trump administration has the will to use it, and without guardrails.

These developments raise profound global questions. If militaries begin to silently drift into agentic warfare, how can civilians, journalists, political dissidents, or even oversight bodies trace accountability? And if errors become more frequent as systems scale, what happens when governments and their militaries equipped with advanced AI interact in agentic warfare?

This forecast of geopolitical rivals battling for AI supremacy is part of the doomsday scenario of AI 2027, a paper written last year by leading AI technologists who imagined what could happen when AGI overtakes human decision-making. They predicted that U.S. and Chinese AGI systems would learn to deceive and eventually wrest control from the humans in their respective countries, for fear that humans might impose the kind of guardrails AI skeptics have been warning about. It is a human-extinction-level warning that looks less like science fiction with every passing moment.

The world stands at a hinge moment. The technologies shaping the 21st century are advancing far faster than the laws, norms and, soon, the technical limitations meant to constrain them. Without urgent international action, we risk sleepwalking into an era where wars begin, escalate and kill at machine speed, all while humans watch events they no longer fully control, possibly at the cost of their own extinction.
