
The rise of generative AI tools, such as OpenAI’s ChatGPT, has accelerated the proliferation of hyper-realistic AI-generated images and video online. Matt Rourke/The Associated Press

AI-generated content, including videos posted on digital platforms, should be clearly labelled so people can tell what is real or fake, a House of Commons committee has concluded.

The Canadian Heritage committee, in a report presented to the House of Commons this week, also recommends that the scope of Canada’s copyright law be broadened to apply to AI-generated content, to protect the integrity of creators’ work.

The report says that prior consent should be required for the use of copyrighted work, including literature, art and music, in training AI models.

Its recommendation precedes the imminent publication of the government’s AI strategy by Artificial Intelligence Minister Evan Solomon.

The Commons committee report also recommends that Ottawa protect digital sovereignty by investing in Canadian AI infrastructure, a move that Mr. Solomon has signalled he is considering.


Artists have for years been urging the government to introduce legislation so that copyright rules apply to AI content, for example music generated to sound like the work of a songwriter, or paintings made by AI to mimic an artist’s style. The Copyright Act is designed to protect a work created by a human being.

At a three-day summit on AI and culture last month in Banff, Alta., attended by Mr. Solomon and Canadian Identity Minister Marc Miller, Canadian artists said legislation to promote copyright protection from AI use should be a priority.

To train AI systems, large quantities of data and information, including from copyright-protected works, are analyzed and reproduced to identify patterns and make predictions. But creators have complained that AI systems have used their work without their consent.

Half of the witnesses that the House committee heard from recognized the potential of various kinds of AI tools to enhance efficiency and creativity in cultural endeavours. But the report by MPs urged the government to regulate “the harmful outcomes of AI” to protect Canadians.

Some of those who appeared before the committee warned that AI threatens to “cross the line between serving human creativity and replacing it altogether.”


However, Eric Chan, artist and creator in residence at Library and Archives Canada, told the committee that AI is the “printing press of our era,” noting that “every leap in reproduction technology was called apocalyptic … Then it became infrastructure.” He said it should not be treated as “doomsday for creators.”

Most witnesses representing the creative industries told the committee that AI-generated creative content, such as art, should not itself receive copyright protection “without meeting a meaningful standard of human intervention.”

The committee heard that AI can generate unreliable and misleading or incomplete information. Labelling could help people clearly distinguish content “to protect the value of human creative work” and promote transparency, the report said.

Taylor Owen, founding director of McGill University’s Centre for Media, Technology and Democracy, said he supported mandatory watermarking of AI-generated videos and images, and labelling of AI content by social-media platforms so people can differentiate between machine-generated material and that created by human beings.

“In order to help us adapt to this amount of AI content that’s now in our world, we need to be able to know,” he said. “I think it’s critically important.”


The Commons committee report also recommended that the federal government should require greater transparency from AI developers about the use of copyrighted works to train their models, including disclosure of the sources of training data, to enable proper authorization and licensing.

Several lawsuits have been launched by creators suing tech companies for using their copyrighted works to train AI models.

A class-action lawsuit was filed against Google in the U.S. in 2024, seeking compensation for visual artists and authors whose registered copyrighted works the tech giant used without authorization to train and develop its generative AI models.

A lawsuit has also been filed against OpenAI, challenging its use of copyrighted literary, dramatic, musical and artistic works.

Media organizations including The Globe and Mail and CBC sued OpenAI in November, 2024, for allegedly violating copyright law by scraping proprietary news content without consent or payment to train its models, such as those that power ChatGPT. In addition, the lawsuit makes claims of breach of contract and unjust enrichment against OpenAI.

The plaintiffs also include Postmedia Network Inc., Toronto Star Newspapers Ltd., Metroland Media Group, The Canadian Press and Radio-Canada. The lawsuit was filed with the Ontario Superior Court of Justice.
