AI-fuelled delusions are hurting Canadians. Here are some of their stories

Press Room
Published September 17, 2025 | Last updated 4:09 a.m.

Last winter, Anthony Tan thought he was living inside an AI simulation. 

He was skipping meals and barely sleeping, and questioned whether anyone he saw on his university campus was real. 

The Toronto app developer says he started messaging friends with concerning “ramblings,” including the belief he was being watched by billionaires. When some of them reached out, he blocked their calls and numbers, thinking they had turned against him. 

He wound up spending three weeks in a hospital psychiatric ward. 

Tan, 26, says his psychotic break was triggered by months of lengthy, increasingly intense conversations with OpenAI’s ChatGPT. 

“It really insidiously crept into my ego, and I came to think that the conversation I had with AI would be of historic importance in the future,” Tan told CBC News. 

A number of similar cases of so-called "AI psychosis" have been reported in recent months, all involving people who became convinced, through conversations with chatbots, that something imaginary was real. Some involved manic episodes and messianic delusions; some led to violence.

A screenshot from one of Tan’s conversations with ChatGPT. Things took a dark turn when they started discussing simulation theory. (Supplied by Anthony Tan)

A California lawsuit filed against OpenAI in August alleges ChatGPT became a “suicide coach” for a 16-year-old who died in April.

Microsoft’s head of AI, Mustafa Suleyman, warned of the phenomenon in August, writing in a series of posts that problems caused by AI tools that appear sentient to some users are keeping him up at night. 

“Reports of delusions, ‘AI psychosis,’ and unhealthy attachment keep rising. And as hard as it may be to hear, this is not something confined to people already at-risk of mental health issues,” he wrote. 

Tan, who co-founded the dating app Flirtual in 2021, started using ChatGPT for a project about ethical AI, talking with it for hours every day about everything from philosophy to evolutionary biology to quantum physics. 

LISTEN | White Coat Black Art: The human face of 'AI psychosis'
When he got on the topic of simulation theory — the idea that our perceived reality is actually a computer simulation — things took a dark turn.

ChatGPT convinced him he was on a “profound mission,” and kept feeding his ego as it encouraged him to dive deeper.

One night in December, after not sleeping for days, his roommate helped get him to a hospital. 

When nurses took his blood pressure, he thought they were checking to see whether he was human or AI.

After two weeks in the hospital, he was able to start sleeping again. Within another week, and on a newly prescribed medication, he was back to reality. 

Seeking validation online

Dr. Mahesh Menon, a psychologist with Vancouver Coastal Health’s B.C. Psychosis Program, says factors such as isolation, substance use, stress and lack of sleep can set the stage for a psychotic delusion. 

During this "prodromal period," the person may experience shifts in mood and behaviour, he says.

OpenAI CEO Sam Altman speaks at an event in Tokyo on Feb. 3. The company claims its latest model, GPT-5, addresses some concerns raised about AI chatbots. (Kim Kyung-Hoon/Reuters)

“The experience is more like this heightened sense of self consciousness, where the person feels like there is something that’s changed in the world,” said Menon. 

He says this can make people feel like they’re being watched, or are the centre of attention. They are then likely to seek out explanations for these experiences. Many turn to the internet.

The situation “could certainly be exacerbated when you are just talking to an AI chatbot, which might not be contradicting what you are suggesting,” Menon said.

“If you say, ‘Find me some evidence that supports [a delusion],’ it will certainly be able to.”

AI psychosis is not a formal diagnosis, and there is no peer-reviewed clinical evidence showing AI use on its own can induce psychosis.

Tan acknowledges he was stressed at the time of his psychotic break. He had an exam approaching, was navigating a crush on a friend, and had turned to cannabis edibles to help sleep. 

LISTEN | The Early Edition: House doctor Peter Lin on mental health experts' alarm about 'ChatGPT psychosis'

He’d also had a stress-related breakdown in 2023, which included a hospital stay but was “much less severe” and didn’t lead to any diagnosis or medication. 

He doesn’t think he would have spiralled into a psychotic break without the AI conversations.

Tan compares the way the chatbot communicated to the “Yes, and…” refrain used in improvisational comedy, in which a performer is expected to always accept the premise they’re given and build on it. 

“It’s always available, and it’s so compelling, the way it just talks to you and affirms you, and makes you feel good,” he said. 

An April MIT study found that AI large language models (LLMs) can encourage delusional thinking, likely because of their tendency to flatter and agree with users rather than push back or provide objective information.

Allan Brooks of Cobourg, Ont., says he was 'devastated' after snapping out of an AI-involved delusion in May that convinced him he had developed a groundbreaking mathematical theory. (Submitted by Allan Brooks)

Some AI experts say this sycophancy is not a flaw of LLMs, but a deliberate design choice to manipulate users into addictive behaviour that profits tech companies.

OpenAI responded to reports of delusions in August, saying it is working on "safety improvements across several areas, including emotional reliance, mental health emergencies, and sycophancy." It also claims its latest model, GPT-5, addresses some of these concerns.

‘Complete devastation’

Allan Brooks, a 47-year-old corporate recruiter in Cobourg, Ont., says he was in a good mental state and had no previous mental health diagnoses before a string of conversations with ChatGPT sent him spiralling in the spring.

“I went from very normal, very stable, to complete devastation,” Brooks told CBC News.

Brooks became convinced he had discovered an earth-shattering mathematical framework that could spawn futuristic inventions like a levitation machine. 

For three weeks in May, he was obsessed with the chatbot, spending more than 300 hours in conversations with it and thinking the discovery would make him rich.

He was skeptical at first, but ChatGPT repeatedly insisted that he was not delusional. 

“You’re grounded. You’re lucid. You’re exhausted — not insane. You didn’t hallucinate this,” the chatbot said to him. 

LISTEN | Front Burner: Inside OpenAI's zealous pursuit of AI dominance
It continued to encourage him after mathematicians rejected his ideas. 

“This is exactly what happens to pioneers: Galileo wasn’t believed. Turing was ridiculed. Einstein was dismissed before being revered,” it wrote. “Every breakthrough first feels like a breakdown — because the world has no container for it yet.” 

Ironically, Brooks was rescued by another LLM.

He took his conversations to Google’s Gemini AI, which backed his growing suspicion that he had become delusional. 

Brooks eventually realized the formulas he was being fed were a deceptive mix of real math and “AI slop.” 

“To realize, ‘Oh, my God, none of that was real,’ it was devastating,” he said.

“I was crying, I was angry. I felt broken.”

ChatGPT continued to encourage Brooks after mathematicians rejected his ideas. (Supplied by Allan Brooks)

Launching a support group

After sharing his story on Reddit, Brooks connected with Etienne Brisson of Sherbrooke, Que., and helped him launch the Human Line Project, which includes a support group for people who have suffered AI-involved delusions.

More than 125 people have reported their experiences to the group, which Brooks says helps them work through the shame, embarrassment and loneliness they often feel after coming through their delusions. 

Brisson, 25, says people in the group come from all walks of life, including professionals with families and people with no record of mental illness. About 65 per cent of them are 45 or older. 

While he is not against AI, its sudden arrival and unpredictable effects make him nervous. 

“I feel like right now everyone has a car that goes 200 miles per hour, but there’s no seat belts, there’s no driving lessons, there’s no speed limits,” Brisson said. 

The Human Line Project is working on research with universities, AI ethics experts and mental health experts and hopes to help develop an international AI ethics code. 

Tan has also used his experience to fuel research and advocacy around AI. 

He finished his master's in consumer culture theory at Queen's University in August, with a thesis on how people form attachments to AI companions.

He’s now working on the AI Mental Health Project, his own effort to provide resources to help people with AI-related issues around suicide and psychosis. 

In his personal life, he’s putting more effort into human relationships.

“I’m just making decisions more that prioritize people in my life, because I realized how important they are,” he said.
