A memorial popped up in the town of Tumbler Ridge, B.C., after the mass shooting in February. Christinne Muschi/The Canadian Press

Seven lawsuits have been filed in California against OpenAI and its chief executive officer, Sam Altman, on behalf of victims of the school shooting in Tumbler Ridge, B.C., alleging that the tech giant's negligence and design defects in its flagship ChatGPT chatbot pushed the shooter toward violence.

The suits, filed Wednesday in U.S. federal court in San Francisco, also all allege that the company avoided alerting police last year about the shooter’s violent interactions with its program because doing so would have forced it to create an internal system for reporting other violent users to the authorities.

That, in turn, the lawsuits allege, would expose the threat the company’s signature product routinely poses to human life, which could complicate a coming initial public offering that could be worth a trillion dollars.

The victims represented in the suits include a 12-year-old girl who remains in hospital after being shot in the head, five students between the ages of 12 and 13 who were killed, and Shannda Aviugana-Durand, a mother of two who was fatally gunned down while working as an educational assistant. The suits state two of the young victims were so disfigured by the gunshots that their families had to identify them by their clothing.

The families are all seeking punitive damages and the recovery of their legal costs, with those who lost loved ones also seeking pre-death economic losses.


Jesse Van Rootselaar, 18, killed eight people in the northern B.C. community in February before killing herself in one of the deadliest mass shootings in modern Canadian history. Before she entered the tiny town’s school, she killed her mother and 11-year-old half-brother in their family home.

OpenAI declined to answer a number of questions from The Globe and Mail about the suits. Instead, spokesperson Jamie Radice sent a statement saying the company has already strengthened its safeguards in response to the tragedy. The statement also said OpenAI has improved how it assesses and treats potential threats of violence by users of its chatbot and is getting better at rooting out people who repeatedly violate its policies.

B.C. Premier David Eby recently said the company’s promises to better self-regulate do not go far enough, and he is calling on Ottawa to set national standards to ensure there is a minimum threshold of reporting “to make sure that the protection of the community, the protection of children, comes before the interests of shareholders.”

On Wednesday, responding in Victoria to reporters’ questions about the lawsuits, he said two key questions remain: What was in the shooter’s chats that concerned multiple OpenAI staffers enough that they wanted to call police, and why was the decision made not to do so?

In Ottawa, federal Artificial Intelligence Minister Evan Solomon was pressed by reporters about why Canada has not crafted laws regulating this technology. He said his government is studying exactly what the chatbots are doing before bringing in legislation. He added that the Canadian AI Safety Institute is working with OpenAI to assess its safety protocols in the wake of the shooting.


Mr. Altman formally apologized to the Tumbler Ridge community earlier this month for not alerting police last year about the shooter’s problematic ChatGPT usage.

Jay Edelson, lead counsel for the plaintiffs in the U.S. lawsuits, said in an interview that this “empty apology” was only issued to “get out in front of” the coming legal challenges.

Mr. Edelson said he expects to eventually file more than two dozen lawsuits against Mr. Altman and OpenAI on behalf of Tumbler Ridge victims, with these American suits superseding the Canadian lawsuit filed recently by the family of the girl recovering from her gunshot wounds. These lawsuits, he said, will be grouped together as a mass action that allows a “few bellwether” cases to be picked to proceed to trial before the others.

In the absence of regulation, Mr. Edelson said, these lawsuits could offer the best window yet into the chatbot’s safety protocols by examining the shooter’s interactions with the program and the company’s reaction.

“We’re very eager to put [Mr. Altman] squarely on trial and put the DNA of OpenAI on trial so people understand exactly how these decisions are made, before it’s too late,” he said.

The suits also call into question the company’s assertion that it “banned” the shooter for problematic use and that she then “evaded” the company’s safety systems to rejoin ChatGPT.

Rather, the filings allege, OpenAI’s public troubleshooting guide helps users who are “deactivated” to sign up with another e-mail and start using the chatbot again. The suits allege the shooter simply signed up again with a different e-mail address but still using her real name.

The lawsuits are also seeking an injunction forcing OpenAI to ban similar users from reregistering; to flag these cases and review them before granting access; and to notify police when the company’s internal systems find a user might be violent.

The suits also ask a judge to force the company to terminate chat conversations that involve repeating or escalating violent ideas, as well as to warn the public that its design features can reinforce and escalate this type of thinking.

With reports from Stephanie Chambers and Justine Hunter
