
Canada’s privacy watchdog is investigating the Elon Musk-owned company X after widespread reports of users generating sexualized deepfakes through the social-media platform’s AI chatbot, Grok.
The investigation, announced on Thursday, will examine whether X’s parent company, X Corp, is meeting its obligations under Canada’s federal privacy laws and whether valid consent from individuals was obtained for the collection, use and disclosure of their personal information to create deepfakes, including explicit content.
“The use of personal information without consent to create deepfakes, including intimate images, is a growing phenomenon that poses serious risks to individuals’ fundamental right to privacy,” said Philippe Dufresne, the federal Privacy Commissioner, in a statement.
Since late December, non-consensual, explicit images generated by artificial intelligence of women and girls have flooded the platform, sparking global outrage and investigations from governments around the world.
The Privacy Commissioner’s investigation is an expansion of an initial probe, launched in February, 2025, into X’s compliance with federal privacy law with respect to the platform’s collection, use and disclosure of Canadians’ personal information to train its AI models.
Canada is the latest country to launch an investigation into X. Indonesia and Malaysia have blocked access to Grok, while Britain’s media regulator Ofcom launched its own investigation.
After initially minimizing the scale of the deepfake problem on the platform, Mr. Musk said last week that X would restrict the image-generation tool to paying subscribers. The move did little to appease regulators around the world.
Late Wednesday, X announced it was blocking Grok from generating sexualized and naked images of real people on its platform in certain locations to comply with local laws. The company did not say in which jurisdictions the tool would be disabled.
Canada’s federal laws cover child sexual-abuse material, known as CSAM, whether it’s real or fictionalized content, but the current legislation does not include digitally altered intimate images of adults. Last month, Justice Minister Sean Fraser introduced an amendment to the Criminal Code that would include punishments for sharing non-consensual deepfakes of adults.
Investigations through the Office of the Privacy Commissioner are one of the few ways the federal government can hold social-media companies accountable, said Suzie Dunn, an assistant professor at Dalhousie University’s Schulich School of Law who studies tech-facilitated gender-based violence.
However, she noted that probes by the Privacy Commissioner can be lengthy – the commissioner’s investigation into TikTok took more than two years – which is why many child-safety experts are advocating for broader online-safety laws.
“Advocates are pushing for things like content moderation legislation that would require social-media companies and AI companies to have at least good guardrails and safety mechanisms in place to protect against these incidents in the first place,” said Prof. Dunn. “Companies have a lot more power to address these issues quickly than we do in the legal landscape.”
Canada has made efforts to revise and introduce new legislation relating to social-media companies. Bill C-27, the Digital Charter Implementation Act, was a 2022 federal bill that aimed to update rules around data use and AI, but it languished when Parliament was prorogued in January, 2025. Bill C-63, the Online Harms Act, was designed to hold social-media companies accountable for harmful content posted on their platforms. It also failed to pass before the 2025 federal election.
Yet Canada’s privacy and data laws, which were last meaningfully updated in 2000, six years before the launch of Twitter, are not fit to regulate the current era of social-media platforms, either, said Matt Malone, a fellow at the Balsillie School of International Affairs who researches tech policy.
“Nothing better epitomizes our failure to recognize how the ownership and control of data are cornerstones of power, security and well-being in the 21st century than this unwillingness to pass privacy and data protection laws,” he said.
Canada’s current privacy laws are largely consent-based, allowing companies to obtain broad permission to collect data and personal information through lengthy terms of service that users accept by checking a box. Mr. Malone said privacy laws should instead move toward a model that limits how private user information can ultimately be used, rather than allowing companies to collect wide swaths of data that can be repurposed for different ends.
And under the current laws, Mr. Malone said, investigations by Canada’s privacy watchdog fail to hold social-media companies accountable.
“Long after the findings arrive, the platforms have changed. The models have changed. The norms have shifted. So how do these investigations that take years to complete actually meaningfully protect anyone? That’s the real problem,” he said.