
Artificial Intelligence and Digital Innovation Minister Evan Solomon on Parliament Hill in Ottawa on Monday. Mr. Solomon is set to meet with Anthropic officials on Tuesday. Photo: Adrian Wyld/The Canadian Press
Anthropic’s new AI model has set off a cybersecurity arms race, as organizations rush to identify and address vulnerabilities in their systems before threat actors are able to easily exploit them.
San Francisco-based Anthropic decided not to release its powerful new model, called Claude Mythos AI, to the general public because of its potential for abuse by hackers.
Instead, Anthropic made a preview version available to a select group of companies including Amazon, Microsoft, Apple, Google, CrowdStrike, Palo Alto Networks and JPMorganChase. The initiative, dubbed Project Glasswing, is intended to help operators of critical digital infrastructure bolster their defences.
AI Minister Evan Solomon is set to meet with Anthropic officials on Tuesday, and officials with Canada’s Innovation, Science and Economic Development department met with Anthropic Monday, said Sofia Ouslis, a spokesperson with Mr. Solomon’s office.
“We are taking this issue seriously and that’s why we’re meeting with representatives from Anthropic,” Ms. Ouslis said. “We welcome Anthropic’s approach of not releasing the model immediately and to put it to work to resolve vulnerabilities,” she said, adding that “we feel this initiative should also include trusted international partners.”
Canadian bank executives and regulators met on Friday to discuss the cybersecurity risks posed by Mythos, which appears to be able to detect and exploit vulnerabilities to a degree that experts have described as dangerous.
The meeting of the Canadian Financial Sector Resiliency Group, which is chaired by Alexis Corbett, the chief operating officer of the Bank of Canada, followed similar discussions in the United States last week.
The group also includes members from the Department of Finance, the Office of the Superintendent of Financial Institutions and several other regulators, along with members of Canada’s six largest banks and Desjardins Group.
The meetings indicate that regulators are concerned about a new breed of AI-enabled cyberattack. Mythos has already found thousands of vulnerabilities, including in “every major operating system and web browser,” according to Anthropic.
“What is so powerful about Mythos is that in the wrong hands, it is profoundly detrimental from a cybersecurity perspective,” said Carole Piovesan, a managing partner at INQ Law, a tech-focused boutique law firm.
The AI Security Institute (AISI), which is part of Britain’s Department for Science, Innovation and Technology, published its own analysis of Mythos on Monday. Two years ago, the best AI models could barely carry out basic cybersecurity tasks, according to the report. AISI researchers found that Mythos, in contrast, could autonomously exploit complex network and software vulnerabilities that would take human professionals days to complete.
Sophisticated cyberattacks involve dozens of steps across multiple technologies, and AISI found that Mythos was the only model able to complete a simulated 32-step network attack successfully. It did so in three out of 10 attempts.
AISI cautioned that its simulations represented easier targets than real-world environments. “This means we cannot say for sure whether Mythos Preview would be able to attack well-defended systems,” its report read.
Still, cybersecurity experts say there is cause for concern. Organizations around the world have accumulated what is referred to as technical debt by addressing software bugs with quick, easy fixes such as patches rather than safer, more time-consuming overhauls. Similar to financial debt, tech debt accumulates interest over time as bugs multiply and maintenance costs increase.
“We can’t patch our way out of this,” said David Shipley, chief executive officer of Canadian cybersecurity software firm Beauceron Security Inc.
“We have got to do a worldwide code refactoring, which would cost so much money it would make people sick to their stomachs,” he said, referring to the process of restructuring existing source code that would be required.
Mr. Shipley said the debt has gotten so large that it’s “the tech equivalent of the 2008 financial crisis combined with climate change.”
“We’re about to go into tech debt bankruptcy at a global scale. We are not prepared for it,” he said.
Richard Stiennon, chief research analyst at IT-Harvest, a cybersecurity research firm based in Birmingham, Mich., predicted that Anthropic will end up releasing Mythos “fairly quickly.”
“They will have to, because OpenAI is talking about some of the powerful capabilities they’re releasing shortly, and the competitive pressure will be there,” he said.
“I’m glad that they didn’t release it right away, because it would have been a shock to the system. Now we’ve kind of had a few weeks’ warning and time to think about how we’re going to defend ourselves against essentially infinite zero day vulnerabilities,” he added.
A zero-day vulnerability is a flaw in an application or operating system that is unknown to its maker, meaning no patch or fix is available when attackers begin exploiting it.
Umang Handa, national leader of cybersecurity managed services at EY Canada, said companies will have to increase their cybersecurity budgets to protect themselves from AI-enabled attacks.
“In a world where AI can surface vulnerabilities at scale, it is an architectural problem, not just a patching exercise,” Mr. Handa said.
However, failing to address the issue would be even costlier, he added.
“If nothing is done, it will be a pretty significant amount of money that it will cost the Canadian economy.”
Some AI experts are alarmed that the decision to release a powerful model such as Mythos rests with a commercial entity, without a mandatory third-party regulatory or auditing process to better understand the risks.
“The current status quo where private companies decide which models are released is harmful to society,” said Nicolas Papernot, co-director of the research program at the Canadian AI Safety Institute.
“Lawmakers need to work quickly to adapt the legislative framework to the current reality of generative AI being a new form of public infrastructure,” he added.
Canadian AI pioneer Yoshua Bengio, now co-president of non-profit LawZero, said it is “deeply concerning” that the job of defining and applying safety standards to models is solely left to companies.
“If current scientific trends continue, we will likely face an increasing number of similar cases,” he said, referring to Mythos. Mr. Bengio called for society to develop ways to ensure that government bodies and other experts can evaluate powerful models before they are released publicly. Such efforts require global co-ordination.
In Canada, the federal government is preparing to release a new national AI strategy, with security as one of the pillars.
Shelly Bruce, former chief of the Communications Security Establishment, wrote in a submission as part of the consultations that operators of large AI models should be required to meet minimum security standards and conduct third-party risk assessments. She also said that Canada should take a leadership position and work with other AI safety institutes on security matters.
Filipe Dinis, former chief operating officer at the Bank of Canada and former chair of the CFRG, said that the rapid pace of change with AI models means that Canadian lawmakers and regulators need to think and act differently.
“The days of taking years, or even months to develop regulations, in my view, are gone,” Mr. Dinis said in an interview. Regulators and companies, he said, need to work together to establish boundaries around who should have access to these tools and what they can be used for.
“The financial sector, whether it’s in Canada or in other countries is very interconnected. So in many ways, a successful attack on one can easily be an attack on all,” he said. “The Canadian financial sector over the years has done a really good job in identifying collectively those risks and addressing them. But this model, in my mind, just brings it to another level.”
It’s unclear whether any Canadian organizations have been given the opportunity to test the new Mythos model.
“The Canadian government, major banks, and other key institutions should also partner with Anthropic to harden the digital infrastructure we depend on,” said Jaxson Khan, senior fellow at the Munk School of Global Affairs & Public Policy at the University of Toronto.
The Bank of Canada declined on Monday to provide further details about Friday’s meeting. The Department of Finance did not respond to a request for more information about the meeting.
The Office of the Superintendent of Financial Institutions said that it does not comment on specific technologies used by individual financial institutions.
“OSFI is aware and tracking developments related to the recent Anthropic model Mythos and the Project Glasswing providing early access to a selected number of organizations to assess and mitigate potential associated cyber risks,” OSFI spokesperson Cory Harding said in a statement.
Royal Bank of Canada deferred to a statement from the Canadian Bankers Association, which said Friday that its members support the responsible use of AI.
The rest of Canada’s six biggest banks – Toronto-Dominion Bank, Bank of Montreal, Bank of Nova Scotia, Canadian Imperial Bank of Commerce and National Bank of Canada – did not respond to requests for comment on whether they have tested or used Mythos.
With reports from Stefanie Marotta and Mark Rendell