Opinion

The sun sets behind a plume of smoke rising from a U.S.–Israeli military strike in Tehran on Tuesday. (Vahid Salemi/The Associated Press)

Gus Carlson is a U.S.-based columnist for The Globe and Mail.

It’s a tried-and-true plot line in science-fiction classics – intelligent machines turn the tables on the humans who created them and wage war to take over the world.

The recent U.S.-Israel attack on Iran reportedly used Anthropic’s AI model Claude and technology from the data-mining company Palantir. Some technology workers worry the increasing use of artificial intelligence by the military is pushing such a “Terminator” scenario out of the realm of fiction and closer to reality.

Employees at several major technology companies, including Alphabet and OpenAI, are demanding stricter limits on the U.S. military’s use of artificial intelligence in warfare, and more transparency regarding the work their employers do with the government – particularly around cloud and AI contracts.


Among the concerns are fears that weapons and the decision-making process directing them could eventually bypass human oversight completely. After all, that degree of autonomy is what advanced machine-learning systems are designed to achieve, or at least approach.

Despite the recent rift between the U.S. government and Anthropic, Claude – embedded in Palantir’s Maven Smart System – had reportedly been vital to operations related to Iran. The system shortened the “kill chain”: identifying targets, assisting in the approval process and launching strikes. The initial attacks on Iran were so fast, furious and plentiful that the level of human involvement in the decisions behind them has been called into question.

Sure, humans may have approved the targets to be hit and the weapons to be deployed, but given the volume of strikes and the sheer weight of complex information to be collected, interpreted and processed, some are asking whether, too often, human oversight was simply a rubber stamp.


It’s a question that has been asked about the war in Ukraine, where some AI-powered drones are programmed to complete their combat missions even when they have lost contact with the humans controlling them.

It’s unclear how the U.S. government’s use of Claude squares with its decision, just hours before the Iran attack, to ban Anthropic for refusing to allow its technology to be used for mass surveillance and fully autonomous weapons. The administration gave the Pentagon six months to phase out Anthropic’s tools.

Even without Anthropic, however, there appear to be many companies willing to play ball. Google, for example, is in talks with the Pentagon about using its AI engine, Gemini, for classified military applications.

No Tech For Apartheid, a coalition that has objected to deals between the U.S. government and Big Tech, issued a joint statement titled “Amazon, Google, Microsoft Must Reject the Pentagon’s Demands.”

The coalition wants companies to push back on Defense Department requirements that could enable mass surveillance or other abusive uses of AI. It also called for increased scrutiny on contracts involving the military and law enforcement agencies such as the Department of Homeland Security and Immigration and Customs Enforcement (ICE).


If anyone knows the potential downsides of AI, it should be the tech workers who create it. So, there is some credibility and validity to their demands – and the ethical concerns behind them.

But they run counter to the realities of war, where ethics are often among the first casualties. In armed conflict, transparency is the enemy of confidentiality, so the idea that the government would share classified information with suppliers is naïve.

And an objective of war is to prevail using every tool available – the military has learned that AI is an enormously powerful one for gaining advantage over adversaries. Restricting its use would restrict the potential to win.

To be sure, the U.S. military has used AI systems such as image analysis in drone and video feeds for years. Now, military strategists are using large language models such as ChatGPT and Claude in what they call “decision support systems” that do everything from identifying targets and assessing threats to sorting priorities.

The power of AI to collect, analyze and identify patterns across massive volumes of information – from satellite imagery to communications to logistics and even social media – is beyond even the smartest human’s capabilities.

Particularly in widespread conflicts fought across many fronts over long periods, the prospect that machines can be more accurate and more resilient than human commanders and soldiers is a compelling argument for AI.

How close are we to a “Terminator” scenario, where eventually humans are marginalized – or bypassed completely – by fully autonomous weapons systems that make their own decisions about who to kill and how to kill them? And are we getting a sneak preview of that future in Iran?
