Canada's Privacy Commissioner Philippe Dufresne presented the findings of a probe into OpenAI with provincial counterparts in Ottawa on Wednesday. Adrian Wyld/The Canadian Press

A three-year-long investigation by federal and provincial privacy regulators found that OpenAI violated laws when handling personal information for the initial release of ChatGPT, but said the San Francisco-based company has made changes to address major concerns.

The report released on Wednesday found that OpenAI collected vast amounts of personal information without adequate safeguards and valid consent, with many users unaware that their data were captured and used to train AI models.

Federal and provincial regulators also faulted OpenAI for not giving Canadians an easy and effective way to correct and delete personal information, and for releasing ChatGPT without first addressing known privacy risks. OpenAI did not provide adequate notice about inaccuracies in ChatGPT responses either, according to the report.

OpenAI’s practices have changed since the Office of the Privacy Commissioner of Canada first launched an investigation into ChatGPT in April, 2023, in response to a complaint. (Privacy regulators from Quebec, Alberta and British Columbia joined the probe soon afterward.)

Some of the changes include filtering to detect and mask personal information, technical tools to block ChatGPT from revealing personal details about specific public figures and a formal data retention and deletion policy.

OpenAI also agreed to make several other changes in the coming months, including publishing more information about its privacy policies and the sources of content used to train its models. The company will better inform individuals who are signed out and using the web version of ChatGPT that their conversations may be used to train future AI models and advise them not to share sensitive information.

“I’ve concluded that the measures that have been and that will be implemented by OpenAI will address the concerns identified during the investigation,” said Philippe Dufresne, Privacy Commissioner of Canada, at a press conference on Wednesday.

A spokesperson for OpenAI pointed to a new blog post explaining the company’s approach to privacy for Canadian users. “We care very deeply about protecting our users’ privacy,” spokesperson Shane Bauer said in an e-mail.

While Quebec’s privacy watchdog has the power to implement monetary penalties for violations, it opted not to in this case. “We have decided to make recommendations instead,” said Naomi Ayotte, vice-president at Commission d’accès à l’information du Québec.

Teresa Scassa, a law professor at the University of Ottawa, noted that the report focuses on resolving the privacy issues identified by the regulators. “There is real value in this in the sense that it helps to advance privacy protection, with co-operation and engagement from industry,” she said. “This is like a negotiated solution to a thorny problem.”

The generative AI models that power applications such as ChatGPT are built on large amounts of data scraped from the public internet, as well as from content that AI companies pay to license. Personal information culled from social media, blog posts and elsewhere can be captured as part of the process.

Companies filter data to remove personal information and harmful content before training models, and apply other techniques to ensure that models don’t memorize specific details. AI companies also teach chatbots to refuse requests for personal information about specific individuals when asked by users.

Since OpenAI released ChatGPT in late 2022, industry practices have continued to evolve. The regulators’ report notes that OpenAI retired the AI models that were initially subject to the investigation, as developers now commonly do when releasing new ones. The third-party filtering tool OpenAI previously used removed only a subset of data that could constitute personal information, and new methods “significantly” reduce the amount of sensitive details used in training, according to the regulators’ report.

One direct outcome of the investigation, according to Mr. Dufresne, was that OpenAI implemented a retention and deletion framework for personal information. “They didn’t have a clear retention schedule at the outset, and now they do,” he said.

Michael Geist, a law professor at the University of Ottawa, said the report shows that policy makers are struggling to keep up with AI. “This decision involves activities that are far in the rearview mirror, with increasingly irrelevant models,” he said.

Indeed, the federal and provincial privacy commissioners were vocal on Wednesday about the need for updated tools. Federally, the Liberal government introduced a new privacy and data bill in 2022, but it died when Parliament was prorogued in January, 2025. A new version has yet to be introduced.

“Modernizing Canada’s privacy framework remains a priority for this government,” federal AI Minister Evan Solomon said in a statement. “The technology landscape is evolving rapidly, and Canadians deserve a comprehensive framework that keeps pace.”

One concern is how to obtain valid consent for data collection when AI companies scrape the open internet. “It doesn’t make sense for the most part to say AI models need to obtain explicit individual consent,” said Emily Laidlaw, an associate professor of law at the University of Calgary. “One has to think about it as not about consent, but what kind of principles and accountability measures should be in place for AI.”

Diane McLeod, Information and Privacy Commissioner of Alberta, said on Wednesday that expanded oversight, monetary penalties and requiring impact assessments before new technology is released are “controls that would ensure that those adequate privacy protections remain in place” while also allowing innovation to occur.
