
Justice Minister Sean Fraser, responding to questions from journalists before a caucus meeting on Wednesday, was among the ministers who met with OpenAI executives. Justin Tang/The Canadian Press

Justice Minister Sean Fraser warned Wednesday that legislative changes could be brought in to regulate artificial-intelligence companies unless OpenAI changes its protocols, after the company failed to alert authorities to posts made by the Tumbler Ridge shooter.

Speaking to reporters in Ottawa, Mr. Fraser, who was among the ministers to meet on Tuesday evening with executives from OpenAI, which operates the AI chatbot ChatGPT, said the government wants to see proposals for rapid improvements.

“The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they’re not forthcoming very quickly, the government’s going to be making changes,” he said.

OpenAI came under scrutiny after The Wall Street Journal reported last week that employees at the company wanted to warn law enforcement about the shooter’s interactions with ChatGPT, including descriptions of scenarios involving gun violence, but that they were rebuffed.

Prime Minister Mark Carney, who earlier this month visited Tumbler Ridge, B.C., where an 18-year-old fatally shot five children and an educator at her former secondary school as well as her mother and half-brother before killing herself, said he had sat with families and first responders and “saw the horrors of what happened.”


“Obviously, anything that anyone could have done to prevent that tragedy or future tragedies, must be done,” he told reporters in Ottawa Wednesday.

Canadian Identity Minister Marc Miller, who was also at the meeting with OpenAI, is working on an online harms bill to be introduced later this year. Meanwhile, Artificial Intelligence Minister Evan Solomon is working on an AI strategy for the government.

Speaking to reporters Wednesday, Mr. Solomon reiterated his disappointment that the OpenAI executives had not brought proposals for improvements to Tuesday’s meeting, such as the thresholds that must be met before alarming exchanges about violence with AI chatbots are reported to police.

“We are looking forward to some concrete proposals. We are disappointed that by the time they came up here, they did not have something more concrete to offer,” he said.

Asked whether he was considering banning ChatGPT in Canada, Mr. Solomon replied, “I would say all options are on the table.”

Peter Wall, Mr. Solomon’s spokesperson, said afterward in a text message that banning ChatGPT from Canada was definitely not an option being considered by the AI Minister.


The discussions at the meeting focused on how an “imminent and credible risk” is identified by the tech company as a threshold for reporting alarming posts to police.

Mr. Solomon said there had been a failure in the measures taken, which had led to a terrible tragedy. He said he expects the company to come back with “hard proposals” and “concrete action” soon.

But Michael Geist, the University of Ottawa’s Canada Research Chair in internet law, said there needs to be greater transparency about the standards that AI companies apply for reporting to the police.

“The public should know how their content is monitored, the standards used for action such as account bans or police reporting, and data on how frequently these actions occur,” he said in a text message.

“The standard that OpenAI adopted – an ‘imminent and credible risk of serious physical harm to others’ – sounds reasonable since a high standard should be used before reporting to police. But whether that standard was met in this case depends on information that isn’t publicly available.”

Public Safety Minister Gary Anandasangaree said the meeting with OpenAI executives was a “critical first step with OpenAI.”

“There’s still a lot of unanswered questions, and there’s certainly a sense of frustration, and, frankly, a sense that tech companies overall are not doing enough to address the issues around information that they hold,” he told reporters.

The Standing Senate Committee on Social Affairs, Science and Technology started a series of meetings Wednesday about the governance and security of AI, including chatbots. Senators asked government officials what steps Ottawa is taking to ensure a tragedy like Tumbler Ridge never happens again.

B.C. Senator Margo Greenwood asked senior officials whether they would introduce a new legal framework governing AI and reporting to law enforcement. She also asked whether the federal government plans to take steps to “hold companies accountable if they withhold information that leads to a great tragedy.”

Mark Schaan, an associate deputy minister at Innovation, Science and Economic Development Canada (ISED) who advises the AI Minister, replied that Tumbler Ridge is “a tragedy that we need to ensure is never repeated.”

He said Canadians expect online platforms to have robust safety protocols and escalation practices in place, adding that government departments are examining their “toolkit,” including private-sector privacy laws, the forthcoming online harms bill and the Criminal Code, for potential action.

Conservative MP Michelle Rempel Garner told reporters she was “concerned about the government’s pace” on addressing issues posed by AI, saying her party would be willing to collaborate on “smart policy” and discussions on the topic.

With a report from Emily Haws
