B.C. Premier David Eby in North Vancouver on Feb. 9. Ethan Cairns/The Canadian Press
British Columbia’s Premier is calling on the federal government to introduce rules setting out when artificial intelligence providers such as OpenAI must contact police over users’ interactions with their platforms, after revelations that the shooter in Tumbler Ridge was banned from the company’s ChatGPT service last year.
Premier David Eby said companies that offer AI chatbots should not be left to decide for themselves whether the police need to be alerted to a user’s behaviour. He also said the province will hold a coroner’s inquest or public inquiry if the public does not get answers about what happened through the justice system.
He made the remarks on Tuesday in Victoria, as AI Minister Evan Solomon was preparing to meet with OpenAI officials in Ottawa to discuss their safety protocols.
“The federal government needs a reporting threshold for all artificial intelligence companies that deliver services in Canada, where they must report to law enforcement, so there’s no judgment calls in a back room that Canadians don’t have a line of sight to that put our kids and families at risk,” Mr. Eby said at a news conference.
“The news that OpenAI might have had the opportunity to stop this terrible tragedy in Tumbler Ridge is just devastating for families in Tumbler Ridge.”
OpenAI has been under scrutiny since the Wall Street Journal reported last week that employees at the company wanted to warn law enforcement about the shooter’s interactions with ChatGPT, including descriptions of scenarios involving gun violence, but that they were rebuffed.
The company has since confirmed that it banned the shooter from using the platform but did not alert police until after the shootings in Tumbler Ridge earlier this month, in which five children and an educator at the community’s secondary school were killed, as well as the shooter’s mother and brother. The shooter then died by suicide.
Mr. Eby said OpenAI owes the victims’ families an accounting of what it knew, and how it responded when employees flagged concerns about the shooter’s online interactions.
“I want them to meet with the families. I want them to look in the eyes of these families and tell them why they made the call they did. And ultimately, I want British Columbians to know what they knew,” he said.
Mr. Solomon responded to the revelations about OpenAI’s connection to the shooter by summoning company officials to a meeting in Ottawa on Tuesday evening.
Artificial Intelligence Minister Evan Solomon in Ottawa on Tuesday. Justin Tang/The Canadian Press
Mr. Solomon asked the tech company to explain its safety protocols, and specifically what it does to protect Canadians from harm.
Details of the Tumbler Ridge shooter’s ChatGPT interactions were not discussed at the meeting because of the continuing RCMP investigation.
The meeting, held at the Department of Innovation, Science and Technology, was also attended by the Minister of Public Safety, Gary Anandasangaree; the Minister of Justice and Attorney-General of Canada, Sean Fraser; and Marc Miller, the Canadian Identity Minister.
In a statement after the meeting, Mr. Solomon said the ministers had made it clear “that Canadians expect credible warning signs of serious violence to be escalated in a timely and responsible way.”
He added, “Internal review alone is not sufficient when public safety is at stake.”
The discussions focused on how an “imminent and credible risk” is identified by the tech company, how cases move from automated detection to human review, and how referrals are handled, particularly when young people may be involved.
“We expressed our disappointment that no substantial new safety measures were presented at this time,” he said.
OpenAI told the ministers it plans to return shortly “with more concrete proposals tailored to the Canadian context,” according to the statement.
“We are reviewing broader measures to ensure that AI systems and platforms operating in Canada have clear standards and accountability,” Mr. Solomon said.
Mr. Miller’s department is working on an online harms bill that is expected to be introduced later this year.
In a recent interview with The Globe and Mail, he indicated that AI chatbots’ interactions with young and vulnerable people were likely to be addressed by that bill.
Taylor Owen, founding director of McGill University’s Centre for Media, Technology and Democracy, and a member of the federal task force advising Ottawa on its forthcoming AI strategy, wrote to Mr. Solomon and Mr. Miller on Tuesday warning that the failure to report the shooter’s posts exposes a gaping hole in Canadian regulation of AI chatbots.
“This tragedy has become another example of real-world harms caused by AI systems,” he wrote.
Mr. Owen said summoning OpenAI to explain its safety protocols was “the right instinct” by Mr. Solomon, but he said this should not have been necessary.
“Had Canada established an online safety regulator that included chatbots in its scope, the government would already know how these companies flag dangerous content, what their escalation thresholds are, how they handle cross-border referrals, and whether their systems are adequate,” he wrote.
The government response should not be to require AI companies to monitor and report private conversations with chatbots to law enforcement, as this raises serious privacy concerns, he added.
“What is needed is a broader regulatory framework that addresses the upstream design decisions and safety architectures that allowed these situations to arise in the first place.”
OpenAI confirmed Friday that the shooter’s account was banned last June for violating the company’s usage policy, but said that her activity did not meet the company’s threshold for notifying law enforcement. A user’s messages to the chatbot would have to indicate an “imminent and credible risk of serious physical harm to others” for that threshold to be met, OpenAI said in a statement.
With reports from Andrea Woo