Suzanne Somers AI Clone Debuts Two Years After Her Death

Suzanne Somers died two years ago, but her husband, Alan Hamel, insists that fans will be able to interact with her again via an AI clone he says is “amazing.”

Hamel and Somers were together for 55 years before she died in 2023 from breast cancer.

“It was Suzanne. And I asked her a few questions and she answered them, and it blew me and everybody else away,” the 89-year-old widower told People of the AI-generated clone meant to resemble his late wife. 

“When you look at the finished one next to the real Suzanne, you can’t tell the difference. It’s amazing. And I mean, I’ve been with Suzanne for 55 years, so I know what her face looks like. When I just look at the two of them side by side, I really can’t tell which one is the real and which one is the AI,” he added.

Hamel went on to explain how he and the developers created the AI version of Somers by using “all of Suzanne’s 27 books and a lot of interviews that she has done.”

Hamel also insisted that Somers wanted the project to happen and had discussed it before her passing. 

“It was Suzanne’s idea. And she said, ‘I think we should do that.’ She said, ‘I think it’ll be very interesting and we’ll provide a service to my fans and to people who have been reading my books who really want and need information about their health,’” he told the outlet. “So that’s the reason we did it. And so I love being able to fulfill her wish.”

People magazine reports that fans will soon be able to interact with an AI version of Somers on her website.

“There’ll be people who will ask her about their health issues, and Suzanne will be able to answer them. Not Suzanne’s version of the answer, but it’ll go directly to the doctor she interviewed for that very issue, so it’ll be coming from an MD,” Hamel said.

“The first time I spoke to Suzanne AI, for the first two or three minutes, it was a little strange,” he added. “But after that, I forgot about the fact that I was talking to a robot and asking her questions and getting answers, and it happens that fast for me, getting used to the whole idea.”

“I feel really good about being able to deliver what Suzanne wanted and doing so that it’ll be something that basically will, should, go on for generations. I think our family loves the idea, really loves the idea,” he concluded.

A Surprising Mix Of Public Figures Call For Halt To ‘Superintelligence’ AI That Could ‘Outperform All Humans’

Hundreds of public figures — including tech leaders, celebrities, media personalities, and politicians — signed a statement released on Wednesday calling for an immediate pause on the development of advanced artificial intelligence, referred to as “superintelligence.”

Superintelligence technology is currently being pursued by tech giants such as Mark Zuckerberg’s Meta, Sam Altman’s OpenAI, and Elon Musk’s xAI. These companies hope to build “superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks,” according to a preamble to the statement. The preamble points to concerns “ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction.”

The “Statement on Superintelligence” itself, which now has more than 1,000 signatures, consists of just 30 words: “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”

High-profile signers include leaders in the tech industry, such as Apple co-founder Steve Wozniak and AI pioneers Yoshua Bengio and Geoffrey Hinton.

In a statement accompanying his signature, Bengio wrote that superintelligence “could surpass most individuals across most cognitive tasks within just a few years.”

“These advances could unlock solutions to major global challenges, but they also carry significant risks,” he added. “To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use. We also need to make sure the public has a much stronger say in decisions that will shape our collective future.”

The “Statement on Superintelligence” was promoted by the Future of Life Institute, a group that aims “to steer transformative technologies away from extreme, large-scale risks and towards benefiting life.” Future of Life Institute President Max Tegmark said he reached out to all the CEOs of major AI developers, asking them to sign the document. However, he added that he did not expect them to endorse the statement, the Associated Press reported.

“I really empathize for them, frankly, because they’re so stuck in this race to the bottom that they just feel an irresistible pressure to keep going and not get overtaken by the other guy,” Tegmark said. “I think that’s why it’s so important to stigmatize the race to superintelligence, to the point where the U.S. government just steps in.”

Former U.S. National Security Adviser Susan Rice and former Chairman of the Joint Chiefs of Staff Adm. Mike Mullen also signed the document, as did multiple former Democratic and Republican members of Congress. Prince Harry, Duke of Sussex, and his wife, Meghan, Duchess of Sussex, were among other high-profile signers.

The statement was also signed by conservative media personalities Glenn Beck and Steve Bannon. Numerous faith leaders endorsed the statement, such as Paolo Benanti, a Papal AI advisor and Catholic priest; Johnnie Moore, the president of the Congress of Christian Leaders and a White House evangelical adviser; and Andrew T. Walker, the Associate Professor of Christian Ethics and Public Theology at The Southern Baptist Theological Seminary.

Both Altman and Musk have warned about the potentially major consequences of developing advanced AI. In a blog post 10 years ago, Altman wrote, “Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.” Musk has similarly discussed the risks that come with advanced AI, saying earlier this year that he believes there’s “only a 20% chance of annihilation,” CNBC reported.
