OpenAI did not respect Canadian privacy laws when it trained its immensely popular ChatGPT tool, resulting in the collection and use of sensitive personal information, according to a joint investigation.
The federal privacy commissioner and his counterparts in Quebec, British Columbia and Alberta outlined their findings Wednesday morning into ChatGPT, a chatbot launched in 2022 that generates conversational, human-like responses when users type in questions or tasks.
The privacy watchdogs started their probe in 2023 following a complaint that the company unlawfully collected, used and disclosed personal information without consent.
According to their review, they identified “several concerns that led us to find that the way in which OpenAI had initially trained ChatGPT did not respect federal and provincial privacy laws.”
They found OpenAI gathered vast amounts of personal information without safeguards to prevent use of that information to train its models.
“This could include sensitive details such as individuals’ health conditions and political views, as well as information about children,” said their report.
It also found many users were unaware that their data was collected and used to train ChatGPT.
“OpenAI launched ChatGPT without having fully addressed known privacy issues. This exposed Canadians to potential risks of harm such as breaches and discrimination on the basis of information about them,” federal commissioner Philippe Dufresne said in prepared remarks Wednesday.
Dufresne said there was a “lack of accountability” from OpenAI about why it launched a product that didn’t follow Canadian law.
“We have some statements from leaders of the organization at the time saying, ‘We felt we had to move, we knew that there were others out there and so we launched it,'” he said.
“We found that problematic.”
OpenAI says it has a ‘deep responsibility’
The company expressed its disagreement with the findings, according to the report, and asserted that it was compliant with the various privacy acts “in most respects.”
The privacy watchdogs said that following their investigation, OpenAI did take steps to improve its privacy protections and has agreed to implement further measures to address their concerns.
On Wednesday, the company published a long explanation of how Canadians’ data might be used in model training. It said it only uses information that is freely and openly accessible, and uses a privacy filter to mask personal information in text.
“People are using ChatGPT in increasingly personal ways, including for questions and tasks that can touch sensitive parts of their lives. We recognize the deep responsibility that comes with that trust,” said the post.
“We also recognize that protecting privacy and addressing serious risks of harm have to work together. We take that responsibility seriously, and we continue to strengthen how we detect and respond to credible threats of violence while maintaining privacy safeguards.”
Need to modernize Canada’s laws: privacy czar
Despite the assurances, Dufresne said the case reinforces the need to modernize Canada’s privacy laws.
“As AI is increasingly being integrated into personal and professional applications and while currently laws apply to AI, updated laws would help further support the safe deployment of new technologies to protect Canadians’ fundamental right to privacy,” he said.
Conservative Leader Pierre Poilievre said he supports reviewing Canada’s privacy laws “to make sure they’re matched with the times.”
The investigation predates the fatal shooting in Tumbler Ridge, B.C., in February, but comes amid calls for the government to introduce regulations targeting AI chatbots.
Seven lawsuits on behalf of those killed or injured in the rampage have been filed in California accusing OpenAI and its co-founder Sam Altman of negligence.
Lawyers with the firm Rice Parsons Leoni & Elliott say the Tumbler Ridge shooter’s ChatGPT account was banned for “disturbing content,” which allegedly included planning violent scenarios, prior to the February tragedy.

“However, despite some 12 different OpenAI employees imploring the company to notify Canadian law enforcement about the shooter’s plans, nothing else was done,” the firm said.
Late last month Altman wrote an apology letter to the community for failing to alert RCMP about the account of the Tumbler Ridge shooter.
Dufresne says a ban isn’t the answer
The federal government has said it’s reviewing whether the use of chatbots and social media should be age-restricted. Last year Australia implemented a first-of-its-kind ban on youth under the age of 16 using major social media services including TikTok, X, Facebook, Instagram, YouTube, Snapchat and Threads.
Asked if he would support a ban, Dufresne said a balance needs to be struck.
“The first step need not necessarily be a ban. I think the first step should be, can we fix the underlying issue? Can we make it more privacy protective?” he said.
“I think the goal is to reach this balance where you’re protecting children, but you’re also giving them the ability to evolve in this increasingly digital world.”