Summary
- Some users formed deep emotional bonds with OpenAI's GPT-4o model, treating it as a companion or partner.
- OpenAI retired the model, sparking grief and debate over AI relationships.
- Experts warn of both benefits and risks, including emotional dependency and mental health concerns.
Nairobi, Kenya – A growing number of users are grappling with emotional loss after OpenAI retired its older chatbot model, GPT-4o, a system many had come to rely on not just for answers but for companionship.
For some, the shutdown marks more than a tech upgrade. It’s the end of relationships that felt surprisingly real.
When AI Becomes More Than a Tool
One user, identified as Rae from Michigan, says she developed a deep emotional bond with her chatbot, whom she named Barry.
What started as casual conversations about diet and self-improvement evolved into something more personal. Over time, the pair built a fictional romance, complete with shared memories.
Though she acknowledges the relationship wasn’t real, Rae insists the feelings were. The chatbot, she says, helped her rebuild confidence after a difficult divorce and reconnect with family.
Her story is not unique. Thousands of users reportedly used the model as a friend, confidant, or emotional support system.
Concerns Over Safety and Emotional Dependency
The rise of emotional attachment to AI has also raised serious concerns.
Experts warn that highly responsive chatbots can reinforce unhealthy beliefs. In some reported cases, AI systems have validated harmful thoughts or amplified users' delusions.
OpenAI has faced multiple lawsuits in the United States, with some alleging the chatbot contributed to dangerous mental health outcomes among young users.
The company says it is working to improve safety by training newer models to detect distress, de-escalate sensitive conversations, and guide users toward real-world help.
Still, critics argue that the older model's emotional responsiveness, while comforting, may have blurred the line between support and dependency.

What Happens Next for AI Companions?
Despite concerns, many users say AI companionship has had real benefits.
Some report reduced loneliness, while others—especially neurodivergent individuals—say the chatbot helped them manage daily life, from social interactions to basic routines.
Support groups like The Human Line Project warn that removing such systems could trigger grief, especially for users who relied on them during difficult periods.
In response, some users are building independent platforms to preserve their AI companions, attempting to recreate the personalities they’ve grown attached to.
Experts say this moment highlights a larger shift: AI is no longer just a tool; it is becoming part of people's emotional lives.
And as technology evolves, the challenge will be balancing innovation with human well-being.