
Students in France look at OpenAI’s ChatGPT app on their phones. As digital products, AI models are excluded from some provisions of the Canada Consumer Product Safety Act. SEBASTIEN BOZON/AFP/Getty Images

Vass Bednar is the managing director of the Canadian Shield Institute and co-author of The Big Fix.

The Government of Canada recently recalled Tim Hortons’ pink and white colour-changing donut mug because it may crack or break when filled with hot liquid – which sort of defeats the purpose of a ceramic cup.

The Canada Consumer Product Safety Act prohibits recalled products from being redistributed, sold or even given away for free. It’s why a kettle that might spontaneously explode, or a crib that occasionally collapses, is instantly removed from the market amid massive public warnings.

But the logic that governs the consumer-facing economy falls apart in a digital one. A little-acknowledged quirk of the legislation is that the Act doesn’t cover products that are digital in nature, instead governing only physical products, including their components, parts or accessories.

That means Canada can recall a coffee mug that cracks under ordinary use, but has no recourse for digital products that fail in ordinary use every single day. That’s a pretty serious gap when large language models are being marketed as assistants despite regularly producing erroneous facts and confidently giving wrong advice.

The worst part is that companies know these systems do not work consistently. Their own terms of use, which warn of potential mistakes, make that plain.


What AI companies want is the financial upside of mass adoption without the ordinary obligations that come with selling something that malfunctions. And we are helping them out by casually normalizing the adoption of AI products while letting the companies that produce and use them disclaim the most basic duty of care.

And yet we are told that these tech tools promise innovation. We are told that the future of work depends on adopting systems that have, in fact, been shown to create more work: more verification, more editing, more monitoring and more noise. The promised productivity gains are proving to be less an industrial breakthrough than a sophisticated charade.

We’ve seen glimpses of what actual accountability can and should look like, but it’s not enforced consistently. In 2024, when Air Canada’s AI-powered chatbot gave a customer false information about bereavement fares, the company was found liable by a B.C. small-claims adjudicator. That case established a basic principle that unfortunately hasn’t stuck: Firms cannot escape accountability just because bad information came from a chatbot instead of a human employee. Claiming that “AI did it” should not be accepted as a serious defence.

So now companies are cautioning us against trusting their AI products while simultaneously shirking responsibility for them. In an update to its terms of use last fall, Microsoft noted that Copilot is only for entertainment purposes (though it has since said that is “legacy language” and will be updated again). Last year, the company rolled out new Copilot features in Excel but cautioned that you shouldn’t use them if you want accurate results. Target has helpfully warned people that if its AI shopping agent makes a mistake, you have to pay for it. And somewhat comedically, OpenAI CEO Sam Altman recently appeared in a viral Instagram reel acknowledging that ChatGPT can’t use a basic timer.

Right now, AI is demonstrably making things worse, and sillier. Fake AI citations are polluting academic journals. Lawyers are citing made-up cases. Deloitte Canada allegedly submitted AI-generated research in a million-dollar report for the province of Newfoundland and Labrador. AI chatbots may even fuel delusional thinking. We’ve accepted hallucinations as a standard feature of these systems instead of rejecting them, and the outcomes they can prompt, as a dangerous bug.


The digital economy has enjoyed a long and luxurious grace period in which “move fast and break things” was a celebrated business model. But AI hallucinations are now being pushed into too many corners of public and economic life for that indulgence to continue.

We need to decide whether, in our dogged pursuit of productivity, we are prepared to let an entire category of defective products seep into daily life without imposing the most basic obligations of safety and accountability.

It’s obvious that these AI models, while novel, just aren’t reliable. So why are we continuing to embed erratic AI systems in workplaces, classrooms, hospitals and public services as though the harms are an acceptable part of the tech?

If we aren’t going to regulate them as products, we can at least police the claims that companies make about their programs through the Competition Bureau’s deceptive marketing standards. But we’re deceiving ourselves if we think that’s enough. LLM outputs may seem different from a cracked mug, but the dangers still spill over.
