Opinion

A woman visits a memorial on the steps of the town hall in Tumbler Ridge, B.C., on Feb. 14. Jennifer Gauthier/Reuters

There are two big questions hanging over the artificial intelligence industry: whether these companies are worth their sky-high valuations, and whether they can be trusted to control their disruptive invention.

While the first concern is of pressing importance to the world economy, the second matters even more, with implications for the future of humanity.

The typical answer from tech leaders comes in two parts. First, that competition requires them to push the envelope and, second, that regulation will stifle innovation. That might sound good in a lobbyist’s presentation, but a recent real-world example in British Columbia points to the tragic downside of under-regulation.

As first reported by the Wall Street Journal, the mass murderer in Tumbler Ridge used OpenAI's ChatGPT in the months before the shootings. The interactions have not been made public, but they reportedly included scenarios involving gun violence. They were sufficiently disturbing that they set off red flags at OpenAI and were reviewed by staff.

The company kicked Jesse van Rootselaar off its platform but seems to have taken no further action at that time.


It is obvious now that this was the wrong decision. Crucially, this does not require the benefit of hindsight. According to the Journal, some OpenAI employees were concerned enough at the time that they wanted to share information with law enforcement. They were overruled.

OpenAI seems to have concluded that the potential cost of wrongly staying silent – gun violence – was less serious than the cost of wrongly going to the police about a person later determined to pose no threat.

This points to the problem with these companies setting their own secret safety protocols. Canadians can't have faith that such policies correctly balance user privacy and public security.

That’s why B.C. Premier David Eby was on the right track when he called for the federal government to impose rules mandating when AI companies must notify police about their users. And it’s why Evan Solomon, the Minister of Artificial Intelligence and Digital Innovation, got it wrong when he called for OpenAI to come forward with stricter standards.

It’s not up to the company to set the terms for its own conduct; that is the role of government.

However, Mr. Solomon was right to say Thursday that his focus extends beyond OpenAI to other providers. And it’s encouraging that Marc Miller, the Minister of Canadian Identity and Culture, has signalled that chatbot interactions with children and vulnerable people will be included in an online harms bill being prepared for introduction later this year. That’s a start.

OpenAI said Thursday that it changed its policies several months ago, and that under those policies it would have flagged the interactions to law enforcement. It also promised to take additional steps. The government should not let those assurances dissuade it from enacting regulations.

This space has argued that social media companies should be held responsible for the content posted on their platforms. The same could be said for content generated by AI, which would give the companies behind these systems an obligation to act. Under Bill C-63, the original online harms act that died after Justin Trudeau resigned, platforms could no longer duck responsibility for the content they carried. Its replacement should do the same.

Regulating AI will be an uphill battle. Not only are the tech companies trying to create a sense of inevitability about their product, but many countries are also eager to get a piece of the pie. Governments that want investment may be unwilling to clamp down on the companies they are courting.

There have been apocalyptic warnings about AI. But there are less dramatic worries. Author and internet activist Cory Doctorow compares AI to “asbestos in the walls of our technological society.” He warned that “we will be excavating it for a generation or more.”

The comparison may prove apt. Recall that asbestos was a wonder technology of its time. Its risks became clear only later but, once installed, it carries a high cost to remove safely. Proper foreknowledge of the danger could have saved a lot of lives and money.

A famous remark by First World War-era French prime minister Georges Clemenceau is usually translated as: war is too important to be left to the generals. In the same vein, AI is too dangerous to leave to the tech bros.
