The Peace Tower on Parliament Hill in Ottawa on Jan. 8. The prospects for the revival of an AI bill in a new session of Parliament or after an election are uncertain. Sean Kilpatrick/The Canadian Press

Many governments were gripped two years ago by the need to pass legislation to regulate artificial intelligence, heeding warnings from experts about the dire risks posed by the powerful technology.

But the momentum has slowed in Canada and the United States, if not ground to a halt. An AI bill in Canada died this month when Prime Minister Justin Trudeau prorogued Parliament. The prospects for a revival of the bill in a new session of Parliament or after an election are uncertain. The incoming Trump administration in the U.S., meanwhile, has signalled its preference to let AI development flourish rather than impose guardrails.

The United Kingdom has changed its tone, too. Under then prime minister Rishi Sunak, the Conservative government was concerned about tackling the risks stemming from frontier AI models, a term used for the most powerful iteration of the technology. Under new Labour Prime Minister Keir Starmer, the government this month unveiled a plan that “mainlines AI into the veins” of the country. “Because for too long we have allowed blockers to control the public discourse and get in the way of growth,” the announcement said.

In Canada, some experts say the failure to pass AI regulation creates uncertainty for business and leaves citizens vulnerable as the technology continues to develop. But it also provides an opportunity to rethink Canada’s approach, which has been widely criticized.

“It’s unfortunate we couldn’t get some bill passed,” said Karim Bardeesy, executive director of the Dais think tank at Toronto Metropolitan University. Federal regulations bring more certainty to businesses and investors, and the rules can even encourage AI adoption. Canadian companies have been slow to adopt AI, and public trust in the technology is low compared with other G7 countries. “We believe that legislation and governance at the national level can help address that,” he said.

The Liberal government introduced the Artificial Intelligence and Data Act (AIDA) in 2022 as part of Bill C-27, which focused on consumer data protection and privacy. AIDA set out broad principles for the development of algorithms and AI models, and the government touted the fact that Canada was one of the first countries in the world to propose such a law.

In hearings before a parliamentary committee, AIDA was assailed from all sides. Civil society groups decried what they saw as weak enforcement measures and the fact that AIDA was only a framework, with many specific regulations to be written later. Companies such as Meta Platforms Inc. said the definitions in the act were too broad and would lead to “overinclusion of AI systems with severe consequences for innovation.” Other witnesses suggested AIDA should be split off from the rest of Bill C-27 to allow for more debate, and they expressed confusion that the act proposed to regulate “high-impact” instances of AI without defining the term.

Innovation, Science and Economic Development Canada (ISED) said that AIDA was designed to be flexible in order to respond to a fast-moving technology, without putting a chill on development. Still, the government made numerous amendments, including provisions for identifying AI-generated content and more clarity around the definition of high-impact systems.

Mark Schaan, a former ISED official and one of the architects of AIDA, said at an event Wednesday that there are many examples of proposed legislation dying only to be resurrected later. He also emphasized that various protections are already in place that apply to AI. “There is a portion of the civic activist community that will say things like, ‘Well, this is just a completely unregulated technology that’s now running rampant,’” he said at a talk organized by law firm Gowling WLG in Toronto. “That’s actually fundamentally not true.”

Privacy legislation covers some data that is used to train AI models, while competition law governs companies developing and deploying AI, he said, adding that ISED has rolled out a voluntary code of conduct for generative AI. In the absence of federal legislation, he continued, private sector groups can develop their own standards. “There is absolutely nothing stopping the industrial community from coming together and thinking about a next level of granularity as it relates to the obligations that are currently part of the code of conduct,” he said.

The demise of AIDA provides an opportunity to reconsider how to approach legislation, including whether an AI-specific bill is even the best option, said Antoine Guilmain, a Gowling partner in Montreal. While not advocating for any particular approach, he pointed out that there are existing laws, self-regulatory regimes, professional organizations and industry regulators that can all play a role when it comes to setting AI guardrails. “It’s a trend globally,” he said of legislation, “but maybe in five or 10 years, we’ll see this was not the best course of action.”

Indeed, the Canadian government faces a choice of which trading partner to align itself with when it comes to AI. The European Union passed an AI Act last year, but the U.S. is moving in the opposite direction. The Republican Party platform for last year’s U.S. elections took aim at an executive order issued by the White House in 2023 that set out obligations for AI developers. “We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology,” the platform reads.

“If we don’t do AI regulation, that’s going to pose challenges for Canadian businesses vis-à-vis Europe,” said Teresa Scassa, a law professor at the University of Ottawa. “If we do, there’s likely to be even more concern it might not be in alignment with what’s happening in the U.S.”

In Canada, it’s not clear how much enthusiasm there will be for an AI bill after a federal election this year. The Conservative Party is well ahead in polls, and in an election that is shaping up to be about housing, immigration and the cost of living, politicians are not exactly talking up a storm about AI governance.

“All it might take will be a few really ugly AI-related incidents before people start saying, ‘This just won’t do,’” Prof. Scassa said.

There could be movement at the provincial and state levels, but not without challenges. In November, Ontario passed Bill 194, which governs the use of AI by the public sector, though the legislation has been criticized as too weak. “Without statutory guardrails and explicit independent oversight, Bill 194 missed the opportunity to secure Ontarians’ trust in AI’s promise,” Patricia Kosseim, the province’s information and privacy commissioner, wrote recently.

In the U.S., California attempted to pass a bill that was supported by Canadian AI heavyweights Yoshua Bengio and Geoffrey Hinton, but vehemently opposed by the tech industry. Governor Gavin Newsom vetoed the bill in September.

“The veto of the California legislation was very telling,” Mr. Bardeesy said, “because that indicated that even progressive governments or administrations in California could not find a way to successfully implement AI regulation.”
