A Canadian’s plan to bring his late celebrity wife back to life in AI form has raised a host of ethical questions about prolonging human existence in the digital universe.
Alan Hamel, a Canadian entertainer and longtime TV personality, recently told People he’s created an “AI twin” of Hollywood star Suzanne Somers, his wife and partner of 55 years who passed away in 2023 from breast cancer.
Hamel, 89, told People the AI bot was trained on Somers’s books and interviews, “so that she’s really ready to be able to be asked any question at all and be able to answer it.”
He said the bot was Somers’s idea and they talked about it for years before she died. He said her AI likeness will exist on her website in the near future, where it will converse around the clock with fans who miss her.
“When you look at the finished one next to the real Suzanne, you can’t tell the difference,” he told People.
CBC News reached out to Hamel but had not heard back by the time of publication.
‘20 million layers of complexity’
Brazil-based AI ethicist Catharina Doria says it’s important to tread carefully around bringing someone back in AI form.
While people have made AI videos of dead celebrities without their consent for entertainment purposes — late comedian Robin Williams’s daughter said she was “disgusted” to see videos using his likeness last month — Doria says this scenario is more complicated.
“I think it’s really hard to have an answer as to if this is right or wrong, or if this is good or bad. I think there’s 20 million layers of complexity,” she told CBC News.
Doria says she worries many people struggle to decipher what’s real online, and that could cause unforeseen problems when people interact with bots like the AI Somers.
She cites the example of an American megachurch pastor recently using an AI video of late conservative pundit Charlie Kirk to make him say things he never said in real life, leaving viewers confused. Despite the disclaimer that it was AI generated, many online commenters were unsure whether it was a real video of Kirk, or whether it was using audio from something he said when he was alive.
“We have to really think — are people knowledgeable and literate enough about AI and generative AI to understand that that person, that thing, whoever is speaking on the other side, is cosplaying a person?” she said.
“There’s an AI literacy conversation that I think needs to be had.”
Doria says the ability to make AI versions of people who have died plays into the “loneliness pandemic” and could make people fall deeper into isolation.
She says this is similar to apps like Character.AI, which allows people to talk with bots that roleplay as celebrities and fictional characters, and companion bots, which some people form romantic relationships with.
“The fear that I, as an AI ethicist, and other experts have, is that that will push people away from society and the world and actual human love,” she said.

‘Deadbots’ unregulated
Companies like Eternos, StoryFile and HereAfter AI are already capitalizing on making realistic AI avatars for people who have lost loved ones.
With access to the dead person’s social media logins, these businesses can create “deadbots,” also known as “griefbots” or “AI ghosts,” which emulate their personality.
Cambridge University researchers have raised concerns about deadbots, including that they could be used by companies in the future to feed users ads or spam loved ones with unsolicited notifications.
Jason Millar, Canada Research Chair in the Ethical Engineering of Robotics and AI at the University of Ottawa, says it’s time for people to start considering how they will manage their digital presence after they die.
“This just adds another layer of complexity to that conversation, given that there’s this possibility of kind of reanimating the dead in ChatGPT form,” he told CBC News.
Millar says he understands the appeal of digitally cloning a loved one, but has concerns about people missing out on the grieving process and ultimately blocking themselves from healing and future happiness.
He says this also raises a host of ethical questions, particularly in a case such as Somers’s, where her AI avatar will potentially be conversing with large numbers of people.
For example, Millar asks, what if the creator eventually wants to turn it off? And who has the right to do so if other people have become attached to it?

He says he also worries that this is playing out in a space that is largely unregulated.
“I think a lot of people might be uncomfortable thinking about these kinds of issues, but I don’t think we can ignore them anymore,” he said.
“There is absolutely no regulation out there to prohibit anyone from doing this right now, that I know of.”
AI tech becoming easier to use
James Hutson, head of human-centered AI programming and research at Lindenwood University in Missouri, says the Somers AI twin blurs the line between a deadbot and a bot that’s used for commercial or entertainment purposes.
But he expects the trend of using AI to reanimate the dead will continue as the technology becomes easier to use.
Hutson sees it as a natural progression of the human tendency to preserve our relationships with lost loved ones, noting that even in the Middle Ages, people would make wax masks of the dead.
“The ability to maintain connection with our loved ones after death is fundamental to human history and culture,” he said.

Hutson studies people’s perceptions of AI-powered avatars, and has found a vast majority draw the line at “embodied” AI, or the uploading of a deadbot into a physical robotic form.
But as these technologies become normalized, he says, that could change.
“That’s the next step, right? Do you want your digital consciousness, so to speak, to live on in the material world in some type of robotics?” he said.