If We Can't Be Immortal, We Can Train Our AI Twin to Be, But Should We?
They're cloning dead people with AI, and fans are paying to talk to them.
In the latest episode of Apple TV+’s The Morning Show, Greta Lee (Tron: Ares, Russian Doll) plays a character looking for a way to boost her television network’s ratings. So obviously, the answer is AI! But in the fictional series, the AI versions of the network’s stars haven’t been properly trained for interaction yet.
So, to impress the media awaiting the big announcement, Lee uses her own AI doppelgänger, which has been trained, to interact with the assembled journalists and shareholders. What follows is a complete disaster: the AI version of herself, projected on a giant screen, proceeds to say every wrong thing Lee has inadvertently shared with it, like how conflicted she is about race and gender in the workplace, and her musings about bedding her boss’s husband.
The “AI” used in the episode is clearly just Lee affecting a stoic, robotic delivery in a pre-recorded video, not a real AI demo. Nevertheless, it drives home how risky it can be to offload someone’s persona to an AI model that might say anything. In fact, during the scene, Lee even tries to save the moment by claiming that the AI is “hallucinating,” a very real phenomenon in which an AI model essentially makes up non-existent facts.
But what happens when someone tries this in real life? Will the AI clone, embodied in a digital replica and armed with the person’s voice clone and personal details, say the wrong thing to the wrong people? We’re finding out now, in real time.
What If…? Stan Lee Never Died
During the Los Angeles Comic Con in September, Marvel fans were able to meet a holographic version of Marvel Comics legend and Spider-Man co-creator Stan Lee. Marvel movie fans always looked forward to the inevitable cameo from Lee in Marvel’s big-screen releases as an homage to the origins of the Marvel Cinematic Universe. When Lee passed away in 2018, his estate went through a period of tumult over the management and control of his image and intellectual property.
Ultimately, Stan Lee Universe (created by Lee’s POW! Entertainment), which is charged with handling his celebrity legacy, teamed with Los Angeles-based Proto Hologram to give fans an extended connection to the superhero icon. Hyperreal, a company that has also developed digital avatars of Paul McCartney, the Notorious B.I.G., and Mike Tyson, helped produce the experience. Comic Con visitors were able to pay $15 to speak to the holographic Lee, who replied with what appeared to be accurate, AI-generated answers in Lee’s familiar voice.
“Unlike ChatGPT, this is not a web crawler. This is a large language model which has got guardrails on it,” Hyperreal’s George Johnson told the LA Times, just before its launch. “It’s specifically Stan’s words. Red carpet interviews, everything he wrote, like Stan’s Soapbox, but with guardrails. Meaning, if you ask him sports questions or politics questions, he’s not going to answer those. But the Stan Lee Universe is feeding us more and more stuff that we can add to the model.”
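Johnson doesn’t spell out how Hyperreal actually builds this, but the pattern he’s describing, a language model constrained to a licensed corpus and fitted with topic guardrails, is easy to sketch in broad strokes. The Python toy below is purely illustrative: the corpus excerpts, the blocked-topic keyword lists, and the answer_as_stan function are my own assumptions, not anything Hyperreal has published.

```python
from __future__ import annotations

# Illustrative sketch only: a persona chatbot grounded in a fixed corpus,
# with a crude keyword-based topic guardrail. Not Hyperreal's system.

BLOCKED_TOPICS = {
    "sports": ["nfl", "nba", "world series", "super bowl", "playoffs"],
    "politics": ["election", "president", "congress", "senate", "vote"],
}

# Stand-in for the licensed corpus (interviews, Stan's Soapbox columns, etc.).
CORPUS = [
    "Stan's Soapbox, 1968: bigotry and racism are among the deadliest social ills.",
    "Red carpet interview, 2016: every cameo is a thank-you to the fans. Excelsior!",
]

def violates_guardrail(question: str) -> str | None:
    """Return the blocked topic a question touches, or None if it's allowed."""
    q = question.lower()
    for topic, keywords in BLOCKED_TOPICS.items():
        if any(k in q for k in keywords):
            return topic
    return None

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy retrieval: rank corpus lines by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(corpus, key=lambda line: -len(q_words & set(line.lower().split())))
    return scored[:k]

def answer_as_stan(question: str) -> str:
    topic = violates_guardrail(question)
    if topic:
        return f"Sorry, true believer -- I don't weigh in on {topic}."
    context = "\n".join(retrieve(question, CORPUS))
    # In a real system this prompt would go to an LLM; here we just show the
    # grounding structure: answer only from the supplied excerpts.
    prompt = (
        "Answer in Stan Lee's voice, using ONLY the excerpts below.\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return prompt  # placeholder for a call to the underlying model

if __name__ == "__main__":
    print(answer_as_stan("Who will win the Super Bowl?"))
    print(answer_as_stan("What did you think of doing all those cameos?"))
```

A production version would presumably swap the toy retrieval for a real search over the licensed archive and hand the final prompt to an actual language model, with the voice clone and hologram layered on top.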
Predictably, some professional critics condemned the idea of resuscitating Lee as a ghost made of pixels, animated by AI-generated “thoughts” and speaking, in his voice, words the deceased celebrity might never have uttered. At the same time, videos of fans interacting with the Lee construct clearly show them fascinated and, in some cases, touched by the experience. I alluded to this probable divide between professional critics and creatives on one side and the general public on the other in my last entry on Tron: Ares.
But that was a discussion of AI in film. This is something meaningfully different. Rather than a performance of a script, this Lee installation was a demonstration of keeping someone “alive” and interacting with the public in a way that seems realistic, far beyond the confines of cinema.
Perpetual Purgatory or Preserved Potential?
I’ll admit, as someone who has been guided by Lee’s stories and character ideas since childhood, and who enjoyed his transcendence into Marvel movie mascot, seeing the automaton version of him trapped in a white holographic box felt a little sad. It’s almost as if we couldn’t let his soul rest, and decided he needed to be kept alive, for all eternity, endlessly mouthing “Excelsior!” to fans through an AI model trained to invent vanilla responses based on his lifetime corpus of quips.
Ghoulish as this computer-powered practice might seem, Lee will be just one among many AI-powered ghosts inhabiting our world. Actress Suzanne Somers (Three’s Company) died in 2023, and her husband, Alan Hamel, has now confirmed that he is working on an AI version of his wife. “One of the projects that we have coming up is a really interesting project, the Suzanne AI Twin,” Hamel told Entertainment Weekly.
He also claims that it was Somers’ idea, and that the AI has been trained on her 27 books about health and wellness. According to Hamel, who was married to the actress for 46 years, the result has been an experience during which he “forgot about the fact that [he] was talking to a robot.”
Could it be that, 20 years from now, we’ll meet a health counselor who says her early inspiration for getting into the field was based on her childhood conversations with Somers’ digital ghost? Is this a good thing? Or bad? Or just part of our evolution as a species? Is an AI avatar that is trained on the sum total of one’s life tantamount to leaving behind a kind of interactive memoir that keeps one’s perspective relevant far past the mortal coil?
And once these virtual versions of ourselves are out in the wild, what happens when someone inevitably violates them in some way?
This possibility was explored many years ago in “Booby Trap,” an episode of Star Trek: The Next Generation in which chief engineer Geordi La Forge falls in love with a holodeck version of a real scientist he has never met. Later, in the follow-up episode “Galaxy’s Child,” when the real person behind the AI-trained hologram discovers that her likeness, voice, and thought patterns were replicated to serve someone’s romantic needs, she confirms the feeling of violation. But what happens, in real life, if this happens after you’re gone? Does the idea of someone “violating” different versions of you, possibly for centuries, seem acceptable, or even fair?
These are questions only we can answer, collectively, as we enter this strange new era of personas that appear to be alive but are simply reflections of the data we’ve produced as living beings. Right now, so many of us are worried about AI taking our jobs, and about rogue AI in the form of artificial superintelligence. Yes, these are fundamental and possibly existential issues we must consider. Still, in the vertiginous gap between what AI is becoming and where we’re going with it, what it means to “be,” and whether we can own that being, is now, jarringly, a real question many of us never thought we’d have to answer.



