Opinion

People check their phones as AMECA, an AI robot, looks on at the All In artificial intelligence conference on Sept. 28, 2023, in Montreal.Ryan Remiorz/The Canadian Press

Clifton van der Linden is associate professor and director of the Digital Society Lab at McMaster University. He is also the founder and chief executive officer of Vox Pop Labs.

Many Canadians are worried about artificial intelligence. While the sentiment is well-warranted in several ways, the technology is poised to be a cornerstone of the innovation economy in the coming years. In order for Canada to capitalize on the economic opportunities ahead, it will be essential to make AI worthy of public trust.

According to the Edelman Trust Barometer 2024 annual survey, only 31 per cent of Canadians trust AI – 19 points below the global average. (The sample includes 1,500 respondents from Canada. The margin of error for the Canadian data is plus or minus 3.3 to plus or minus 3.9 percentage points, 99 times out of 100.)

Among the concerns Canadians have about AI are fears of job displacement, mishandling of personal data, and the reinforcement of unfair biases in areas such as hiring and policing. There is also apprehension about the technology being used to spread misinformation and undermine privacy.

The federal government has taken steps to promote the development of responsible AI, which strives for AI that is deemed trustworthy. However, this approach too often focuses on the technology itself and runs the risk of underestimating or altogether overlooking the role of public institutions in brokering trust.

Observers of waning trust in public institutions often lament the consequences for democratic stability, but the economic implications are no less consequential. To shore up public confidence in AI, the focus must extend beyond the technology itself to the systems that regulate and oversee its development and use.

The analogy of food safety illustrates this point. Most Canadians consume the food that they purchase from grocery stores, restaurants and farmers’ markets without fear that it will cause them mortal harm.

This is not because food itself is inherently trustworthy or because Canadians are particularly knowledgeable about the inner workings of the food production system, but because they have confidence in the regulatory frameworks put in place to ensure their food is safe to eat.

At the same time, Canadians have considerable latitude to make choices about which potential harms they are willing to accept. Thanks to provisions such as food guides and nutritional labelling, informed decisions can be made as to the implications of consuming foods with certain ingredients.

The system works because Canadians by and large put their faith in the public institutions charged with its oversight. We should strive for a similar approach with AI.

Rather than downloading responsibility onto individual Canadians to evaluate claims about the trustworthiness of AI, Canada's public institutions need to earn the trust required to credibly assure Canadians of the technology's safety. At the same time, these institutions need to give Canadians a measure of discretion to make informed decisions about the level of risk they are willing to take on.

Of course, some may question the wisdom of invoking references to food safety at a time when the Canadian Food Inspection Agency is under fire for the failure of its algorithms to detect a deadly listeria outbreak. But this in many ways exemplifies how the locus of public trust resides with the institution and not with the technology.

While an algorithm is ostensibly at fault, accountability still sits with the institution that sanctioned and oversaw its use. Trust in an emerging technology may well have been compromised among bureaucrats, but the greater risk is that the public loses confidence in the institutions charged with safeguarding the food supply.

The public’s trust in new technologies such as AI ultimately hinges on trust in the institutions responsible for their oversight and regulation. When these institutions falter or fail to demonstrate accountability, public confidence in innovation can erode swiftly.

The stakes are particularly high in Canada, where the Edelman survey finds that only 49 per cent of Canadians express confidence in government when it comes to introducing innovations into society. This erosion of trust has significant consequences – not just for public confidence in new technologies like AI but for Canada’s capacity to lead in the global innovation economy.

If Canadians are skeptical of their government’s ability to oversee and regulate transformative technologies, the ripple effects could hinder adoption, stifle investment and compromise the country’s competitive edge.

Building trust in AI is not simply about making technology better – it’s about reimagining the relationship between Canadians and the institutions that serve them. This approach demands more than regulatory oversight; it requires public institutions to engage in critical introspection and transformation.

Just as AI systems are being reimagined to prioritize ethical design and transparency, so, too, must institutions evolve to serve as credible brokers of trust in an increasingly complex technological landscape.
