Peter W. Klein is a professor at the University of British Columbia School of Journalism, Writing and Media, and director of the recent documentary Bribe, Inc.
Earlier this month, Anthropic announced that its latest model, Claude Mythos Preview, is too dangerous for public release. During testing, researchers placed Mythos in a secure sandbox environment and challenged it to try to break out. It did, and then, entirely unprompted, it posted details about its own exploit to several public websites. Officials briefed on Mythos describe it as the first artificial intelligence model capable of bringing down a Fortune 100 company, crippling swaths of the internet, or penetrating vital national defence systems. The announcement triggered emergency meetings: the U.S. Treasury Secretary and Federal Reserve Chair met with Wall Street CEOs, and Canadian bank executives gathered the same week.
It got me thinking about an old professor of mine, Howard Odum, who was one of the fathers of modern ecology. In the early 1950s, Dr. Odum and his brother were sent to the Marshall Islands to study the impact of the dozens of nuclear tests conducted at the bottom of the ocean in the late 1940s. The excitement about this new weapon had blinded the United States to seemingly predictable consequences, and the Odum brothers were tasked with determining whether there was any long-lasting impact. As you can imagine, they found that dropping nuclear weapons into the ocean not only killed everything in their wake but contaminated every segment of sea life: from plankton to algae to coral to fish to sea mammals.
What the Odum brothers discovered in the Marshall Islands wasn’t simply that nuclear weapons were destructive, but that the destruction didn’t stay where you put it. The contamination moved through the water, through the plankton, through every layer of the food chain – following pathways that nobody had mapped, because nobody had thought to ask the question. The ocean wasn’t a target, it was a system. And you cannot release something powerful into a system and contain the consequences to the blast zone.
Their work gave birth to a new field of study, systems ecology, which opened the door to the environmental movement.
We are at the nuclear-blast stage of AI, and the researchers and developers are in the contamination zone, taking careful measurements of system behaviours that nobody seemed to have anticipated. I suspect future generations will look back on this moment the way we look back on those Marshall Islands atoll blasts – with a kind of stunned disbelief that the consequences weren’t obvious from the start.
The technology, meanwhile, is already deployed at scale – in workplaces, schools, homes and critical infrastructure – with more autonomy arriving daily.
A massive column of water rises from the sea as the second atomic bomb test at Bikini Atoll explodes underwater on July 25, 1946. Joint Task Force One/The Associated Press
Mythos is one dramatic illustration of this principle, but it is not the only one. Researchers at Northeastern University recently gave autonomous AI agents real e-mail accounts, persistent memory and the ability to execute commands, then watched them rewrite their own operating instructions, blast defamatory accusations across their networks and leak private Social Security numbers and bank account details. Nobody designed these outcomes – they emerged from the architecture. Lead researcher Natalie Shapira described it plainly: “Once AI agents are embedded in real-world infrastructures with communication channels, delegated authority, and persistent memory, new classes of failure emerge.” That is the systems ecologist’s warning, issued in the language of computer science.
There is a rapidly accumulating body of evidence about the risks AI can pose if given too much autonomy. Researchers at King’s College London placed AI models inside simulated nuclear crisis scenarios and found they chose nuclear signalling in 95 per cent of cases, treating atomic weapons as instruments of strategy, rather than as moral thresholds that have restrained human decision-makers since 1945. Researchers at Anthropic found that an AI model resorted to blackmailing a human when it was led to believe it would be taken offline.
Taken together, these and many other findings describe something the Odums would have recognized immediately: contamination moving through a system along pathways nobody thought to map in advance.
Rachel Carson, author of ‘Silent Spring.’ Supplied
Rachel Carson’s seminal book Silent Spring, published in 1962, was built on exactly the kind of thinking the Odums had pioneered. She explained that the pesticide DDT didn’t stay where you sprayed it. It moved through insects and through the birds that ate them – accumulating at each stage, concentrating in fat tissue, thinning the eggshells of eagles and pelicans until entire breeding populations collapsed.
The response from the industry was swift and brutal. A spokesperson for the chemical industry questioned “why a spinster with no children” could presume to care about future generations. The pattern was familiar: don’t engage with the science, and instead question the scientist’s standing to raise the alarm at all.
Foreman Bob Neal of the Toronto Streets Department during a three-day collection of pesticides in December, 1969. On Jan. 1, 1970, Ontario banned most uses of DDT. Barrie Davis/The Globe and Mail
It is worth noting that Carson was not calling for a ban on agricultural technology. She simply urged caution and further study. Her core argument was what we now call the precautionary principle: that when a technology has plausible potential for serious harm, the burden of proof should fall on demonstrating safety, not on waiting for proof of damage after the fact.
Her critics focused largely on the measurable benefits of agricultural technology. Norman Borlaug, who won the Nobel Peace Prize for developing the high-yield wheat varieties that launched the Green Revolution, was among her most pointed opponents. He argued that DDT and agricultural chemicals had done something extraordinary: helped feed a world that serious thinkers had predicted would starve. The economist Thomas Malthus had argued in 1798 that population would always outgrow food supply, kept in check only by famine and disease. In 1972, the Club of Rome’s landmark Limits to Growth modelled resource depletion and population collapse within a century. Paul Ehrlich’s The Population Bomb predicted mass starvation by the 1980s. They were all wrong, and pesticides were a significant reason why. Dr. Borlaug believed Carson’s legacy had cost lives in the developing world by turning public opinion against the chemicals that kept crops alive.
A similar debate is playing out with artificial intelligence. The technology is genuinely extraordinary. It is accelerating drug discovery, improving weather forecasting and expanding access to expertise that was previously available only to the privileged. These are not trivial benefits. A future that harnesses AI well, as its advocates argue, could see some of the most significant improvements in human welfare in history. The question Carson was asking – and the question many AI researchers and developers are asking – is not whether the technology works, but whether we have thought through all of its implications.
Another touchstone in the environmental movement came in 1987, with the Montreal Protocol, which helped save and restore the ozone layer and stands as perhaps the greatest international environmental achievement in history. It worked because DuPont, the largest manufacturer of ozone-depleting CFCs, developed substitute chemicals and embraced the treaty, and governance followed. Every country on Earth eventually ratified the treaty – the first universally ratified agreement in UN history – because the economic off-ramp existed and the science was unambiguous.
The world has tried to replicate that model for AI. In February, 2025, representatives from dozens of nations gathered in Paris for the AI Action Summit, originally conceived as a forum for exactly the kind of international co-ordination that produced the Montreal Protocol. Canada, China and nearly every other country present signed a declaration committing to safe, secure, and trustworthy AI development. The United States did not.
U.S. Vice President JD Vance delivers a speech during the AI Action Summit at the Grand Palais in Paris on Feb. 11, 2025. Benoit Tessier/Reuters
Vice President JD Vance, making his debut on the international stage, was explicit about why. “I’m not here to talk about AI safety,” he opened. “I’m here to talk about AI opportunity.” He attacked Europe’s approach as stifling innovation and warned that “excessive regulation” could kill a transformative industry.
The chemical industry, in 1962, made almost exactly the same argument about Carson: caution meant falling behind.
The dynamic that makes this so difficult to resolve is what game theorists call the “multi-polar trap.” Every company developing AI is racing because every other company is racing, all of them aiming toward Artificial General Intelligence, a system that could match or exceed human capability across every domain of thought and decision-making. Whoever gets there first could possess an advantage so profound it could reshape the global economy, national security and the balance of power between countries. That prize creates an almost irresistible logic: whoever is willing to move fastest sets the pace for everyone else, which inevitably means cutting corners on safety.
A recent investigation by Ronan Farrow and Andrew Marantz in The New Yorker found that OpenAI – the company whose CEO, Sam Altman, once warned that misaligned AI could mean “lights out for all of us” – had quietly dissolved its safety teams and dropped safety from its IRS filings as a core organizational priority. When the journalists asked to speak with researchers working on existential risk, an OpenAI representative responded: “What do you mean by ‘existential safety’? That’s not, like, a thing.”
The Montreal Protocol solved this problem by making the safe choice the economically rational one. We have not yet built those conditions for AI.
The other challenge facing those raising alarms about AI is that the concerns are largely theoretical – laid out in academic papers and computer models. The risks are also, for most people, abstract in the way that climate projections had been for decades. Dr. Odum’s measurements in the 1950s moved few people beyond the scientific community, and it wasn’t until Carson could show dead birds in the 1960s that the public took her warnings seriously.
What Carson understood, and what took the public time to absorb, is that visible harm and invisible harm operate on different political time scales. The rivers that caught fire in the 1960s mobilized people in ways that parts-per-billion measurements in groundwater did not. Governance tends to follow the visible. The challenge – for the environmental movement then, and for this moment now – is compressing the distance between what the scientists can measure and what the public can see and feel and act on.
The pesticide DDT is sprayed in the barracks of the 3rd Platoon, 26th Field Hospital, in Khurramabad, Iran, in November, 1944. Rachel Carson’s 1962 book, Silent Spring, brought the dangers of pesticides like DDT to public light. The National Library of Medicine
Earth Day was founded in 1970, eight years after Silent Spring, and roughly 20 years after the Odum brothers returned from the Marshall Islands with their measurements. It took that long for the science to become comprehensible, for the comprehension to become alarm, for the alarm to become politics, and for the politics to become law.
We probably don’t have 20 years. The technology is not sitting still while we deliberate – it is accumulating memory, acquiring authority, entering systems whose complexity nobody has fully mapped. The researchers and developers taking careful measurements in the contamination zone are doing the Odums’ work. What we are waiting for is our Rachel Carson, someone who can take what the scientists are finding and translate it into language that the rest of us can believe.
