
Alphabet CEO Sundar Pichai speaks at a Google event in Mountain View, Calif., in 2024. The platform claims it isn’t a publisher or producer, but a neutral intermediary, despite actively designing systems that decide what people can see. Jeff Chiu/The Associated Press
Vass Bednar is a contributing columnist for The Globe and Mail and host of the podcast Lately. She is the managing director of the Canadian SHIELD Institute and co-author of The Big Fix.
If your toaster catches fire and burns your kitchen down, you have legal recourse. You can sue the manufacturer under product liability law, which holds companies responsible for the physical harm that defective products cause. But if a platform’s recommender system or other AI tools (such as LLMs) serve content that stokes eating disorders in teens, enables rampant harassment, or steers users toward financial scams, there’s little (if any) recourse at all.
Why? Despite the immense influence digital platforms have over our economy, politics and well-being, their core products – large language models, recommender systems and other AI-based tools – and the harms they cause remain under-regulated.
Product liability law was designed to protect consumers from unsafe or defective goods. In most countries, it applies strict standards to manufacturers, meaning they can be held accountable even if they didn’t intend to cause harm. This has helped instill a culture of care and responsibility. Yet somehow, Big Tech has sidestepped this system almost entirely.
Their products aren’t any safer than your toaster. But software doesn’t fit neatly into product liability law designed for physical goods, and its harms – be they financial or psychological – are less obvious, even if no less immediate.
Many courts still don’t consider software a “product” under traditional product liability law, and its harms are difficult to assess and prove. Meanwhile, platforms such as Meta and Google claim they’re not publishers or producers but neutral intermediaries, despite actively designing systems that shape what people see, click, pay and believe.
This legal exceptionalism is especially glaring given the growing harms associated with algorithmic systems: discrimination, misinformation, manipulation, addiction and market distortion. A 2023 study in Science Advances found that YouTube’s recommender system can lead users down conspiracy-laden rabbit holes. Social-media platforms have been linked to mental-health declines in youth. Generative AI systems are already producing fake legal cases, fraudulent ads and deceptive images at scale. It’s a mess.
Even stranger, people are increasingly forming emotional connections with AI, often turning to chatbots for companionship, advice and even intimacy. These relationships can have troubling consequences. Some people develop dependencies on AI systems that aren’t actually sentient, projecting meaning onto responses that are generated probabilistically. When chatbots “hallucinate” (the technical term for producing false or misleading information), the results can be deeply destabilizing.
In a widely discussed New York Times story, a reporter engaged with Microsoft’s chatbot and was disturbed when it expressed love for him and urged him to leave his wife. In another case, a Belgian man who had formed a close bond with an AI assistant during a mental-health crisis died by suicide after the chatbot encouraged extreme thinking. These incidents reveal that while AI can simulate emotional understanding, it lacks real empathy, and when people treat it as a source of truth or comfort, it can lead them down dangerous paths.
In other sectors, if a company released a product this prone to harm, regulators would step in and issue a recall. Why haven’t they?
Part of the problem is Section 230 of the U.S. Communications Decency Act, which shields platforms from liability for user-generated content. But that shield has been stretched to cover algorithmic design decisions: choices made by companies, not users. In Canada, we haven’t enshrined such protections in law, but we also haven’t challenged the logic behind them.
It doesn’t have to be this way. Europe is moving forward with updated product liability rules that explicitly include software and AI systems. Its approach says clearly: If a digital system causes harm, the company behind it should be answerable. Despite protests from major companies, this isn’t about stifling innovation; rather, it’s about aligning incentives for safer design and clearer accountability.
We should do the same in Canada. That could mean updating our product liability laws to cover algorithmic systems, especially when they interact with vulnerable populations. It could mean introducing a duty of care for platforms, requiring risk assessments and redress mechanisms. And it could mean rejecting the idea that complex digital products are somehow exempt from the same expectations we place on toasters, toys and cars.
In Canada, the Toronto District School Board is suing Meta, Snap and ByteDance for $1.6-billion in damages, arguing that their platforms harmed students and framing the lawsuit as a product liability case. It’s an intriguing test of whether we’re finally ready to treat digital systems as what they are: products. These digital products should be subject to the same duty of care as any other.