By MELISSA LUNARDI
The Rise of Digital Grief Support
We’re witnessing a shift in how we process one of humanity’s most universal experiences: grief. Several companies have emerged in recent years to develop grief-related technology that lets users interact with AI versions of deceased loved ones, while many others turn to general AI platforms for grief support.
This isn’t just curiosity; it’s a response to a genuine lack of human connection and support. The rise of grief-focused AI reveals something uncomfortable about our society: people are turning to machines because they’re not getting what they need from the humans around them.
Why People Are Choosing Digital Over Human Support
The grief tech industry is ramping up, with MIT Technology Review reporting that “at least half a dozen companies” in China are offering AI services for interacting with deceased loved ones. Companies like Character.AI, Nomi, Replika, StoryFile, and HereAfter AI offer users the ability to create and engage with the “likeness” of deceased persons, while many others turn to AI to quickly normalize their grief and seek answers for it. This digital migration isn’t happening in a vacuum. It’s a direct response to the failures of our current support systems:
Social Discomfort: Our grief-illiterate society struggles with how to respond to loss. Friends and family often disappear within weeks, leaving mourners isolated precisely when they need support, especially in the months that follow.
Professional Barriers: Traditional grief counseling is expensive, with long wait times. Many therapists lack proper grief training, with some reporting no grief-related education in their programs. This leaves people without accessible, qualified support when they need it most.
Fear of Judgment: People often feel safer sharing intimate grief experiences with AI than with humans who might judge, offer unwanted advice, or grow uncomfortable with the intensity of their grief.
The ELIZA Effect
To understand why grief-focused AI is succeeding, we must look back to 1966, when the first AI-companion program, ELIZA, was developed. Created by MIT’s Joseph Weizenbaum, ELIZA simulated conversation using simple pattern matching, specifically mimicking a Rogerian psychotherapist practicing person-centered therapy.
Rogerian therapy was perfect for this experiment because it relies heavily on mirroring what the person says. The AI companion’s role was simple: reflect back what the person said with prompts like “How does that make you feel?” or “Tell me more about that.” Weizenbaum was surprised that people formed deep emotional connections with this simple program, confiding their most intimate thoughts and feelings. This phenomenon became known as the “ELIZA effect.”
ELIZA worked not because it was sophisticated but because it embodied the core principles of effective emotional support, something we as a society can learn from (or in some cases relearn).
What AI and Grief-bots Get Right
Modern grief-focused AI succeeds for the same reasons ELIZA did, but with enhanced capabilities. Here’s what AI is doing right:
Non-Judgmental Presence: AI doesn’t recoil from grief’s intensity. It won’t tell you to “move on,” suggest you should be “over it by now,” or change the subject when your pain becomes uncomfortable. It simply witnesses and reflects.
Unconditional Availability: Grief doesn’t follow business hours. It strikes at 3 AM on a Tuesday, during family gatherings, while you’re at work, or on a grocery run. AI works 24/7, providing instant support by quickly normalizing common grief experiences like “I just saw someone who looked like my mom in the grocery store, am I going mad?” AI’s response demonstrates effective validation: “You’re not going mad at all. This is actually a very common experience when grieving someone close to you. Your brain is wired to recognize familiar patterns, especially faces of people who were important to you… This is completely normal. Your mind is still processing your loss, and these moments of recognition show just how deeply your mom is still with you in your memories and awareness.” Simple, on-demand validation helps grievers instantly feel normal and understood.
Pure Focus on the Griever: AI doesn’t hijack your story to share its own experiences. It doesn’t offer unsolicited advice about what you “should” do or grow weary of hearing the same story repeatedly. Its attention is entirely yours.
Validation Without Agenda: Unlike humans, who may rush to make you feel better (often for their own comfort), AI validates emotions without trying to fix or change them. It normalizes grief without pathologizing it.
Privacy and Safety: AI holds space for the “good, bad, and ugly” parts of grief confidentially. There’s no fear of social judgment, no worry about burdening someone, no concern about saying the “wrong” thing.
No Strings Attached: AI doesn’t need emotional reciprocity. It won’t eventually need comforting, grow tired of your grief, or abandon you if your healing takes longer than expected.
AI Can Do It, But Humans Can Do It Better. Much Better.
According to a 2025 article in Harvard Business Review, the #1 use of AI so far in 2025 is therapy and companionship.
This tells us that there’s an enormous and widening gap when it comes to how we show up for each other when life gets hard. Still, no matter how precise and practical a Grief-bot is, nearly all of us would rather have care and understanding from our friends, family, colleagues, and community than chat with an AI.
So, what can we learn from AI, and what are humans uniquely able to do that AI never can?
AI can show up consistently, but humans can show up with context: AI is available 24/7 and can validate the conversation with on-the-spot information. But humans bring shared history. You can text “thinking of you” on a loved one’s birthday, or check in during the holidays.
AI can follow their lead, but humans can read between the lines: AI mirrors back what people share and asks open-ended questions. But humans can sense when “I’m fine” doesn’t mean “I’m fine,” and more support is needed.
AI can encourage repetition, but humans can weave stories together: AI can listen to the same story repeatedly without complaint. But humans can notice new details each time, or recall how the story has changed over time. You can genuinely say, “It’s been a while since we last talked about your dad. I’d love to hear how he’s been crossing your mind lately.”
AI can offer virtual presence, but humans can offer practical presence: AI provides instant support through conversation. But humans can show up practically by saying, “I’m going to the grocery store Thursday, what can I pick up for you?”
AI can acknowledge loss, but humans can honor the whole person: AI validates that someone was important. But humans can keep their memory alive by sharing memories and saying their name naturally: “I remember how much Sarah liked spicy food. I bet she would have loved this restaurant.”
AI can respond when called upon, but humans can anticipate heavy grief days: AI responds when someone reaches out. But humans can offer preemptive support: “I know next week is your first Mother’s Day without your mom. I’m clearing my schedule just in case.”
AI can provide comfort through words, but humans can offer physical presence: AI validates feelings through responses. But humans can sit in shared silence, offer hugs that last as long as needed, or simply say, “I don’t have words, but I’m here.”
The Opportunity
We’re so starved for empathetic responses and presence that now we’ll accept them from tools incapable of genuine empathy. But what if, instead of surrendering to digital surrogates, we used this as a mirror to see what we’re failing to provide each other?
The lesson isn’t that AI is replacing human connection, but rather that AI is showing us (or reminding us) exactly what human connection should look like. Every feature that makes grief-focused AI effective is something humans can do better, with the added benefit of genuine empathy, shared experience, and authentic, in-person care.
We’re living through a grief literacy crisis. Our discomfort with death and loss has created a society where grieving people feel isolated and misunderstood. But these digital grief companions offer us a blueprint for change.
The question is: will we be open to learning from them?
Melissa Lunardini, Ph.D., is the Chief Clinical Officer at Help Texts, where she oversees the delivery of clinically sound, multilingual grief support globally, via text message.