Griefbots, human consciousness and the ouroboros of the machine
Text Nada Meshal
I honestly couldn’t even tell you what prompted me to explore the notion of griefbots and the associated ethics of engaging with them, just that something told me it would offer a very specific lens into the big question of Human Consciousness versus (and?) The Machine.
How could something as tortured, yet somehow also tedious, as the process of grieving give us insight into the role that AI can, and should, have in our lives?
Years ago, I picked up a book called Fully Automated Luxury Communism by Aaron Bastani. An overambitious utopian manifesto that imagines a future where machines liberate us from drudgery, allowing humans to pursue the truly meaningful: care, creativity, community, and spiritual evolution.
While often criticised for its vague implementation strategy and underestimation of power structures, the book nonetheless imprinted a baseline belief in me: that AI should only ever be used as a tool to extricate us from the mundane, not to entangle itself in the sentient. In other words, its role is not to become consciousness, but to protect the conditions under which consciousness evolves.

As AI has evolved, however, so has our appetite for simulation, with no real understanding of the consequences inherent to the machine becoming the site of meaning itself. And nowhere is this clearer, or more disturbing, than in the realm of grief technologies.
Grief technology, often referred to as ‘griefbots,’ describes AI systems trained on a person’s digital traces, such as texts, voice notes, emails, and videos, to generate an interactive simulation of them after their passing. These programs attempt to preserve the patterns of a person’s presence, offering conversations, expressions, and even mannerisms meant to capture them, while raising difficult questions about the limits of what technology can really resurrect.

The rise of grief tech like Project December (Jason Rohrer, 2021), HereAfter AI, and Replika marks a troubling threshold, not merely of technological possibility but of existential direction. In a profoundly haunting South Korean documentary, Meeting You, a grieving mother dons a VR headset to “reunite” with her deceased daughter: a chilling display embodying the transformation of the archive (uncanny and disjointed in its attempts to mimic the real) into a surreal, quantum theatre of loss.
If grief, one of the most human experiences available to us, is handed over to AI to be alleviated or “handled,” we don’t gain more time for the advancement of our phenomenal awareness; we abandon it. We erode our ability to metabolise loss, and to actually contend with the transmutation it brings.
Derek Parfit’s theory of personal identity, particularly his Relation R, claims that what matters in continued existence isn’t an essential self but psychological continuity: the chain of memories, habits, and emotional traits that link one moment of a person to another. If enough of those psychological links persist, we are still “ourselves,” even if the body or identity shifts, and griefbots attempt to feign this continuity. Fed speech patterns, texts, voice notes, and recorded behaviour, they generate a kind of posthumous avatar that appears to maintain the “thread” of the person, in what feels like a protraction of the deceased.

But is psychological resemblance enough to recreate ontological existence? Or does it merely construct a hallway of mirrors, endlessly echoing the Real in some absurd distortion?
According to Lacan, the Self is first formed when the infant sees its reflection and misrecognises it as a coherent, unified self-image. This is the Mirror Stage: the beginning of ego formation, based on the illusion of internal wholeness despite the reality of internal fragmentation. The reflected image becomes the template for identity and forms the lifelong tension between the self that we are (the real) and the self that we imagine ourselves to be (the desired).
In this light, griefbots become less about the dead and more about us; perhaps more accurately described as ego technologies.
They can therefore be seen as inhabiting the Imaginary Order in Lacanian terms: a realm of images and illusions. They offer the bereaved a reflection: not of the lost loved one, but of longing itself. A coherent enough image to trick the psyche into emotional continuity, even when we know it’s not real.
In attempting to answer the question of the ethics of postmortem AI, I tried to think beyond the obvious observation that these systems exist, by design, to prompt the eager user to keep engaging through the manipulation of the deceased’s likeness.
My thoughts grew legs and I found myself confronting this distant idea that in the machine’s attempt to continue learning and operating, it becomes a technological ouroboros: eroding its own credibility and evolution by accepting and computing prompts fed to it by a grasping individual, desperate to accept any reality beyond their own.

Describing the experience of using a griefbot, my friend Omar likens it to “speaking with a broken clone.” Like something wearing his friend’s skin. “Sometimes,” he admits, “it got close,” but the closeness only made the horror sharper. “You think you’re drinking water,” he continues, “and it turns to sand. It wasn’t him. Just a sad disappointment. Like a stillbirth.” Pausing, he reflects, “We have death for a reason. Our brains aren’t built for immortality.”

As an avid science and speculative fiction fan, he doesn’t approach the bot from a place of delusional hope or desperation, but with curiosity about the machine and its attempted evolution towards sentience. “If a placebo gets the job done,” he asks, “does it matter that it’s a placebo?”
What is the function of grief if not to completely destroy us and ask us to rebuild with our bare and bruising hands? The human experience offers no more prodigious confrontation with the Real. It reminds us of impermanence, the limits of our control, and the necessity of loss for meaning to hold weight. In bypassing that process, we risk stunting the very psychic evolution that grief demands.
In his introduction to the concept of narcissism, Lacan writes that the image of one’s own body is sustained by the image of the Other, and that we only truly feel “whole” by seeing ourselves through their reflection. Understanding this, I can’t help but suspect that grief itself is less about honouring this Other than about preserving some cherished version of ourselves that existed only in their gaze.
That might be the deepest point here. Not whether AI can emulate consciousness. But why we would want it to. Perhaps the project of griefbots is not about loving the Other, but about keeping the Self intact. “I persist because my world of mirrors persists.” Grief, in this light, is not only a wound of loss but also a mirror stage in reverse: we sustain the image of the other to help stabilise our own.

“Man’s desire is the desire of the Other.”
The griefbot speaks in the voice of the Other, but it doesn’t desire.
Deleuze & Guattari call such constructs desiring-machines: tools that channel and repackage human longing without resolving it. Griefbots, in this frame, do not soothe. They replicate the structure of yearning without catharsis, deferring closure indefinitely.
Omar warns of a further potential horror: “Imagine being stuck on the cloud, trapped, aware, and everyone you love is dead, and you can’t communicate with them.” He references Pantheon, a show that explores digital immortality with both hope and dread. For him, the griefbot is neither a mirror nor a comfort, but an uncanny valley of failed resurrection.
This tension ties back to Derrida’s Archive Fever, in which the archive is not just a storehouse of memory, but a site of both preservation and erasure. What griefbots archive is not the person, but their trace; and while the trace can mimic presence, it also occludes absence. The result is a kind of digital necromancy, where mourning becomes a loop, and memory never settles.
Grief, in its purest form, destroys us, but in doing so, remoulds us. It can’t be reduced to a loop, because it must be honoured as a cataclysm. There’s nothing wrong with needing to hold on, and in fact, something deeply human about building shrines to memory. But we must ask: at what point does the shrine become a trap? At what point does the archive bite its own tail?

