
Tumbler Ridge tragedy underscores need for regulation, but getting that right can be tricky. Getting it wrong can be dangerous


When it comes to the dangers posed by chatbots, much of the conversation has centred on the potential harms caused by what these applications can tell us. Chatbots powered by artificial intelligence can be sycophantic and reinforce our worldview by mirroring our language and thoughts. Those traits have been at issue in cases of teen suicide and among people who say they have experienced delusions after talking to chatbots at length.

The case involving the Tumbler Ridge, B.C., shooter appears to centre on the reverse – information a user confided to a chatbot.

What exactly 18-year-old Jesse Van Rootselaar discussed with OpenAI’s ChatGPT months before fatally shooting eight people on Feb. 10 and then killing herself has not been disclosed, nor do we know what the chatbot said in reply. We do know that OpenAI had flagged her conversations but opted not to contact law enforcement last summer.


The deaths of eight people in Tumbler Ridge, B.C., on Feb. 10 marked one of the worst mass shootings in recent Canadian history. Jennifer Gauthier/Reuters

The incident has exposed a glaring oversight gap when it comes to AI, and shown how the rapidly advancing technology is presenting novel issues that defy easy answers. Under what circumstances, for example, should AI companies report potentially dangerous interactions to law enforcement?

“What really strikes me here is the revelation that OpenAI is recording potentially all user chats and sending chat logs to law enforcement on a selective and proactive basis,” said Blair Attard-Frost, an assistant professor at the University of Alberta who studies AI governance. “AI companies in Canada have been given significant latitude to decide on their own safety standards.”


The incident has also highlighted the power and responsibility wielded by AI companies. ChatGPT has roughly 800 million users, close to 10 per cent of the world’s population. Some people are sharing intensely personal thoughts and feelings with chatbots, treating them as trusted companions or therapists, when in reality these are products operated by corporations that have little to no duty of care to users. When those conversations turn to harming others, there is no rulebook in Canada for what AI companies should do next.

For some experts, the incident reinforces the need for Canada to introduce legislation requiring AI companies to protect public safety and guard user privacy. B.C. Premier David Eby, for one, has called for rules governing when AI companies alert police.

Canada has no overarching AI legislation and, unlike some other jurisdictions, no set of rules that applies specifically to chatbots. In fact, in his first public speech as federal AI Minister last year, Evan Solomon said that Canada would avoid “over-indexing on warnings and regulation” in order to capture the economic benefits of the technology.

“Our approach has always been to make sure that we are building a safe and reliable environment,” Mr. Solomon told reporters Thursday. “But the urgency has changed.”


B.C. Premier David Eby speaks after the province declared a day of mourning at the legislature in Victoria on Feb. 12. Chad Hipolito/The Canadian Press

The federal government has been looking at updated privacy and online harms legislation, which could touch on AI platforms. Neither bill has been introduced yet, nor has the government indicated whether chatbots will be covered by the online harms legislation, as some experts have urged.

Tackling the issue is fraught. Should AI companies define their own procedures for reporting to law enforcement? Or should government? And how will measures be enforced? Any provisions would need to strike a balance between privacy and safety, and take care to set appropriate reporting thresholds. Too low, and police could be showing up at the homes of Canadians over benign conversations. Too high, and some tragedies may not be averted.

For some experts, these questions are coming too late. The real-world dangers of AI are well known, but regulation in Canada has not kept pace. “We could be in a much better place had there been some more serious discussions,” said Fenwick McKelvey, associate professor of communication studies at Concordia University. “None of this was unexpected.”

The desire to leap to regulation after a tragedy like that in Tumbler Ridge is understandable, but AI companies have not been transparent about how they report to police, making it difficult to assess what needs fixing. “It’s really hard to talk about a regulatory solution when there’s a complete vacuum about what we know,” Prof. McKelvey said. The fact that Mr. Solomon had to hold a meeting with OpenAI on Tuesday to learn about its safety protocols underscores that reality.


Residents hug as they place flowers at a memorial for the victims of the mass shooting in Tumbler Ridge, B.C. Christinne Muschi/The Canadian Press

According to The Wall Street Journal, Ms. Van Rootselaar discussed scenarios involving gun violence with ChatGPT over several days last year, and these conversations were flagged by an automated review system. About a dozen employees debated whether to contact law enforcement, but OpenAI leaders decided against it. The company banned her account in June, 2025.

OpenAI has said that it refers cases to authorities when a user presents an imminent and credible risk of serious physical harm to others. The conversations did not meet that bar because the company did not identify credible or imminent planning, according to OpenAI.

“Were these people equipped to make that kind of judgment call and should they or OpenAI be in that position?” said Katrina Ingram, founder of Ethically Aligned AI, a consultancy in Edmonton. “In the absence of any other rules or regulations, private companies will set their own policies.”


Minister of Artificial Intelligence Evan Solomon on his way to a caucus meeting in Ottawa earlier this week. Justin Tang/The Canadian Press

In a letter sent by OpenAI to Mr. Solomon and other ministers on Thursday, the company offered a little more detail about its procedures. Vice-president of global policy Ann O’Leary wrote that “several months ago” OpenAI worked with mental health and law enforcement professionals to refine criteria for when conversations merit a referral to authorities.

“Mental health and behavioural experts now help us assess difficult cases,” Ms. O’Leary wrote, adding that OpenAI’s criteria are now more flexible, to account for the fact that a user might not discuss the target, means or timing of planned violence even when an imminent risk is present. “Under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today,” she wrote.


The letter does not make clear exactly when those changes took effect, and it leaves questions unanswered, including what the company’s refined risk criteria are and how further progress will be monitored and assured. “This looks like an attempt to preserve the status quo of industry self-regulation through voluntary commitments,” said Prof. Attard-Frost, adding that action taken by one company does not ensure that other industry players will adopt the same standards.

In a statement Friday, Mr. Solomon said he will be meeting with OpenAI chief executive Sam Altman next week and will be seeking more details from the company, including how human review is conducted. “We have not yet seen a detailed plan for how these commitments will be implemented in practice,” he said. “All options remain on the table.” He will also meet with other major platforms in the coming weeks.

OpenAI is not the only tech giant with a consumer-facing chatbot. Google and Anthropic did not reply to requests for comment about their own procedures.

A spokesperson for Meta Platforms Inc. declined to comment but provided links to policy stating the company may notify law enforcement about emergency situations, such as risk of death or imminent bodily harm. The spokesperson also sent a video about how Meta stopped a suspected school shooting in the U.S. by reporting social media content.

Chatbots are more private and intimate than social media. Mr. Altman has acknowledged that some people are sharing deeply personal matters with ChatGPT. “Young people especially use it as a therapist,” he said in a podcast interview last year. The company was still figuring out the privacy implications, he said, but argued those conversations should be afforded strong protections from disclosure.

“If you go talk to ChatGPT about your most sensitive stuff and there’s a lawsuit, we could be required to produce that,” he said. “That’s very screwed up. I think we should have the same concept of privacy for your conversations with AI that we do with a therapist.”

Therapist notes can be subpoenaed, but Mr. Altman’s suggestion that the company should be granted any of the same privileges as mental health providers is striking. While people may treat ChatGPT like a therapist, OpenAI is beholden to none of the same standards as mental health professionals in Canada. Moreover, the incentives are skewed. Chatbot providers have a financial stake in keeping conversations flowing, and these interactions can be valuable to help train future AI models and deliver targeted advertising. “There’s a willingness to think about these conversations as profitable, but not the liability embedded in them,” Prof. McKelvey said.


Therapists draw on a lot of context to determine whether someone presents a risk of harm to themselves or others, including personal histories and existing diagnoses, and may consult others, such as clinical supervisors. “There is a necessity for you to be well-trained, and have a really sound system for understanding what is presented in front of you,” said Candice Alder, a psychotherapist in B.C. and teaching fellow at the Center for AI and Digital Policy.

An AI company looking at a chat transcript does not have the same context, making an assessment more difficult. Expressing harmful thoughts does not mean someone will act on them, either. “I can tell you as a therapist the kinds of things that young people say on the internet are not always a reflection of exactly what is going on,” she said.

Because of the personal nature of chatbots and the sensitive data amassed by AI companies, the potential for these applications to cause harm to the public is only growing, experts say. Many professions whose members hold influence over individuals, such as law and medicine, have regulatory bodies and standards. “I don’t see why it should be any different for companies that offer a product that is now embedded in the lives of a large part of the population,” said Vincent Denault, assistant professor at the University of Montreal’s School of Criminology.


On that front, other jurisdictions are further ahead than Canada. The European Union’s AI Act requires developers of general-purpose AI systems to perform safety tests and mitigate risks. Proposed federal AI regulation in the U.S. puts a “duty of care” on developers to prevent and mitigate foreseeable harm to users. It would also require companies to regularly assess how their systems can contribute to psychological harms.

Both New York and California have legislation requiring chatbot providers to notify users that they are not talking to a human and to maintain protocols for preventing suicidal ideation and self-harm.


The Canadian flag hangs at half-mast at the legislature in Victoria, B.C., on Feb. 11. Chad Hipolito/The Canadian Press

Canada is also the only G7 country that has no online harms legislation and no digital safety regulator. The EU’s Digital Services Act requires online platforms to report when they become aware of information indicating a threat to life and safety, but not to actively monitor communications. The European Commission is assessing whether ChatGPT is covered by the legislation.

Emily Laidlaw, associate law professor at the University of Calgary, said Canada could draw from the European approach. “There’s some room for Canada to consider what would be appropriate here to add to law, but it will still always be a baseline,” she said.

Setting that threshold is tricky. When the federal government was working on online harms a few years ago, initial proposals requiring social media platforms to report harmful content to law enforcement alarmed experts. “It was so broadly framed that the pushback was pretty extreme,” Prof. Laidlaw said.


A memorial on the steps of the town hall in Tumbler Ridge. Jennifer Gauthier/Reuters

Indeed, there is a risk of infringing on privacy and civil liberties if company policies or proposed government measures go too far. If tech companies are required to monitor and report chatbot interactions, why stop there? Any form of written communication held by tech companies – texts, e-mails, searches – could theoretically provide hints of an impending crime. That kind of regime veers into a surveillance dystopia.

Even if the threshold is sufficiently high, the incentive for companies may be to overreport to reduce their liability and ensure compliance. More cases flagged to the police, even if not entirely credible, could have the unintended consequence of causing harm to the public. “This might disproportionately impact certain groups of people who get falsely flagged,” Ms. Ingram said. “That would be something to address in the process and to ensure a redress mechanism.”

Still, the same problem can arise if companies are permitted to develop their own policies. “Inevitably, without a mandated standard we end up with inconsistent results,” said Jon Penney, an associate professor at York University who researches AI and the law.

Prof. Penney said that any measure has to be narrowly tailored and specific about which threats need to be reported, have a “common sense” standard applied to what constitutes an imminent threat, and codify the factors companies should use in exercising discretion. Transparency is key, too. The law should compel companies to disclose their procedures.

“We cannot simply leave it to companies, who almost surely are weighing not just privacy and public safety, but also corporate, brand, profit, and reputational considerations,” he said.

Earlier this week, Justice Minister Sean Fraser warned of legislative changes should OpenAI not improve its safety protocols. Now that OpenAI has started that process – its letter said it would continue enhancing procedures, develop a direct point of contact with Canadian law enforcement and better detect users who repeatedly violate its policies – the next step may be with government.

But the letter also revealed major shortcomings. The fact that OpenAI is vowing to establish a point of contact with Canadian law enforcement suggests it did not have one before. The company also failed to detect that Ms. Van Rootselaar had a second ChatGPT account, discovering it only after her name had been made public.

“It’s actually proof that their safeguards failed twice, first in deciding not to refer the original account to law enforcement, and then in failing to catch a repeat offender re-entering their platform,” said Helen Hayes, associate director of policy at the Centre for Media, Technology and Democracy. “The commitments in this letter should be read as a response to systemic failure, not an isolated error.”

While the focus on AI is justified, especially because it is so new, it is worth remembering that existing systems have flaws, too.

In the case of Tumbler Ridge, police had visited the home of the shooter multiple times over the past few years owing to mental health issues. Officers seized firearms from her home two years ago, but someone in the family successfully petitioned for their return.

“The important thing is to look at the whole ecosystem of intervention,” Ms. Alder said. “Not just the technology.”

With a report from Irene Galea
