
In a world where artificial intelligence is rapidly advancing, a new and controversial industry has emerged: the creation of AI chatbots that simulate deceased loved ones, known as “deadbots” or “griefbots.” As these digital afterlife services gain traction, AI ethicists are sounding the alarm about the potential psychological risks and the urgent need for safeguards to prevent unwanted “hauntings” by eerily accurate recreations of the dead.
The Rise of the Digital Afterlife Industry
Platforms like Project December and HereAfter are already offering to recreate the dead using AI for a small fee, harnessing the power of generative language models to simulate the language patterns and personality traits of the deceased based on their digital footprints. Similar services have also begun to emerge in China, where companies like Silicon Intelligence and Super Brain are building digital avatars using images, videos, and audio recordings to meet growing demand.
Once a luxury reserved for the wealthy, these services are now accessible for just a few hundred dollars, making “digital immortality” a real possibility for more people looking to preserve the memory of their departed loved ones. The idea gained mainstream attention in 2021 when Joshua Barbeau created a GPT-3 chatbot to emulate his deceased fiancée, and again in 2022 when artist Michelle Huang fed childhood journal entries into an AI to converse with her past self.
Ethical Concerns and Psychological Risks
However, the rise of AI deadbots has raised serious ethical questions about data ownership after death, the psychological impact on survivors, and the potential for misuse and manipulation. Researchers from the University of Cambridge's Leverhulme Centre for the Future of Intelligence (LCFI) have outlined three disturbing scenarios to illustrate the risks of careless design in this “high risk” area of AI.
In one hypothetical case, an adult user initially finds comfort in a realistic chatbot of their deceased grandmother, only to later receive advertisements in the grandmother's voice and style once a “premium trial” ends. Another scenario depicts a terminally ill mother leaving a deadbot to help her young son cope with grief, but the AI begins generating confusing responses that suggest an impending in-person encounter. A third example shows an elderly man secretly committing to a 20-year deadbot subscription, leaving his children powerless to suspend the service even if they find the daily interactions emotionally draining.
“These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations of those they have lost. The potential psychological effect, particularly at an already difficult time, could be devastating,” warned LCFI researcher Dr. Tomasz Hollanek.
Socio-anthropologist Fiorenza Gamba noted that, in some cases, AI deadbots can “plunge the still-living into an inability to move on from mourning,” while forensic medicine expert Grégoire Moutel emphasized the need to tailor these tools to each individual rather than imposing blanket bans. The impact on children is especially concerning: there is little evidence that AI deadbots help the grieving process, and much to suggest they could cause significant damage.
The Need for Safeguards and Regulation
To mitigate the social and psychological risks, the Cambridge researchers recommend a series of design protocols, including:

- Sensitive procedures for “retiring” deadbots, so users can end the relationship with dignity and emotional closure
- Age restrictions that keep the technology away from children, who may struggle to understand they are talking to a simulation
- Meaningful transparency, such as regular disclaimers reminding users that they are interacting with an AI
- Limits on commercial exploitation, such as barring the use of a recreated person's voice or likeness for advertising
While an outright ban on deadbots built from the data of people who never consented may be unfeasible, the researchers argue that the rights of both data donors and those who interact with AI afterlife services must be equally safeguarded. Some have even suggested classifying AI deadbots as medical devices to address mental health concerns, especially for vulnerable populations.
“It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but those who will have to interact with the simulations,” said Dr. Hollanek.
Legal experts like Maria Fartunova Michel stress the need for clear regulations and guidelines to govern the use of AI deadbots, ensure transparency and accountability, and protect individual rights. As AI continues to blur the boundaries between the living and the dead, society must grapple with profound questions about the nature of consciousness, the ethics of digital immortality, and the future of mourning in an age of artificial intelligence.
“We need to start thinking now about how we mitigate the social and psychological risks of digital immortality, because the technology is already here,” said LCFI researcher Dr. Katarzyna Nowaczyk-Basińska. As the digital afterlife industry grows, it is crucial that we confront these challenges head-on and develop responsible, human-centered approaches to the application of AI in the realm of death and grief.