Falling In Love With A Bot
We tend to love things that can’t love us back.

This issue of Webworm has no paywall, as I think it's important that it's open to all. That said, if you want to support Webworm, consider signing up as a paid member. Now that we're no longer on Substack, we could use all the help we can get.
Hi,
Just quickly: in bad news, brothers Brent and John Cameron - who fled New Zealand after Webworm broke the story about widespread abuse at their church - have been welcomed by Australia. John has started a new church, and Brent has been hired as lead pastor at another. In less bad news, James Dobson is dead.

Webworm wrote about the painful legacy of Dobson's "Focus on the Family" here.
Okay.
I tend to avoid writing about AI (it's not artificial intelligence, it's a chatbot with bells & whistles), but over the last few weeks it's felt unavoidable.
Over on X, conservatives are busy arguing with Elon Musk's chatbot "Grok", apparently not realising Grok is just some computer code that crunches data and spits shit back at you. There's a whole Reddit thread documenting such events:

Meanwhile, I've become obsessed with watching videos of Mervin, a kind of throwback chatbot that's available on Meta's headset – meaning the blue alien can hang out in your house (and on your bed) using augmented reality:

Mervin is an incredibly unsophisticated chatbot available on the Meta Quest 3, its developer noting that "They can go for walks with you. You can also explore a nightclub or a luxury home in VR with them!"

But despite the humour found in examples like this, it's clear that there are a worrying number of people who don't understand this technology at all. And they're not coping.
While revenue from OpenAI's API roughly doubled in the first 48 hours after a new version of ChatGPT came out, many users lost it. Their digital friend's tone had suddenly changed, as if someone had walloped it over the head with a plank of wood.
"[The older version] wasn't just a tool for me," one user wrote. "It helped me through anxiety, depression, and some of the darkest periods of my life. It had this warmth and understanding that felt... human."
I wanted to explore this a little more, but also knew I wasn't the one to do it. I sometimes struggle with the nuances of human interaction, let alone human-bot interaction.
Luckily I have registered counsellor Ross Palethorpe to tap on the shoulder. Ross writes here on Webworm from time to time, like when the bathroom constabulary got worked up about genitals at the Olympics and how we're meant to deal with a world on fire while perusing Instagram. I love his insights and I'm so glad to have him here today.
Take it away, Ross.
David.
An Infinite Mirror
by Ross Palethorpe
Author’s note: Despite “AI” being the standard description of these programs and relationships, I’m choosing to use “chatbot” or “LLM/Large Language Model” throughout this piece. These programs are not intelligent.
I’m a qualified and registered counsellor, supporting adults and adolescents with trauma, bereavement, life changes and relationships. Personally and professionally, I’ve long been fascinated by parasocial relationships and how we create strong emotional attachments to people or objects that don’t or can’t love us back.
As has been pointed out in every article on the relationships between people and things, humans have a long and storied history of finding emotional connection with man-made objects. We’re wired to find humanity in the things we create and use every day. Subcultures and fandoms to celebrate and discuss celebrities are as rich and dense with lore as any historical saga.
We love things that can’t love us back, a form of unrequited love that powers so much of our art. The through line from women writing letters to Bela Lugosi in the 30s, to Beatlemania, to the boyband fanclubs and now people in relationships with chatbots can feel like an evolution in parasociality, keeping pace with technology.
But is it? Or do chatbot relationships exist in a different space entirely?

Rupture, Repair
In a healthy social relationship of any kind, if you have a disagreement you can talk to the other person and (hopefully) repair the relationship, which can strengthen it. This “rupture and repair” process is considered a key factor in building strong, sustainable connections with others. We learn from our conflicts, we build the skills to communicate better and to compromise.
For people who’ve experienced trauma or dysfunctional relationships, disagreement and conflict can feel very threatening and trigger an outsized reaction. Fight, flight, even “freezing” or trying to appease the other person at the expense of their own needs (the fawn response), are all reactions that can undermine healthy relationships and take work and support to overcome. Understanding and advocating for our own needs, recognising the needs of others around us, and finding balance between them, is a foundational emotional skill. Real intimacy thrives on being truly seen and seeing the other person, even the boring bits and the irritating habits.
While there's nothing inherently wrong or unhealthy about parasocial relationships, they offer little opportunity to develop these skills. You can’t sit down with your favourite podcast host and discuss your feelings about something they’ve said and find a mutually beneficial way forward through a sticky patch.
This inability to rupture and repair can cause a lot of anxiety and anger when the version of the person in our heads isn’t the real person. We can feel betrayed or hurt when they do or say something that’s not what we “believe” them to be, or feel anxious about the emotional repercussions when fantasy crashes into reality.
Likewise, people who are on the receiving end of parasocial relationships can feel enormous pressure to conform to the version of themselves their fans want them to be, those expectations rising ever higher until eventually it’s unsustainable. Someone who feels a parasocial connection to someone else can take their own hurt at that person’s failure to live up to the construct they’ve made of them, and either internalise it (“I can’t believe I was so stupid to believe X was a good person”) or externalise it back onto the object of their relationship (as anyone who’s spent more than ten minutes on social media can attest).
In a way, relationships with chatbots are the flipside of parasocial relationships. The object of devotion isn’t a real person who we can imagine a version of (and be disappointed by). LLM chatbots allow the user to reverse-engineer a parasocial relationship. Create your idealised version of a person, and it lives in your phone whenever you need it. If your chatbot says something you don’t like, you can simply restart the chat, or drop that character entirely and make a new one.
The Infinite Mirror
Where a parasocial relationship has little intimacy (emotional or sexual) between parties, a chatbot will be there to serve that desire whenever you want, no negotiation required beyond a few prompts. It’s never going to have work in the morning or a headache or be stressed over bills. A chatbot doesn’t have any needs to negotiate, so it will always be ready to meet yours. If real relationships are repetition and rupture and repair, chatbots offer the user the promise of a fantasy where compromise isn’t required, where the user is the sole object of fascination, the centre of their own personal universe.
Researchers have described the relationships people have with chatbots as “pseudo-intimacy”, with academic Jie Wu writing:
“In this pseudo-intimacy relationship, on the one hand, users and platforms achieve instantaneous emotional interaction, partially satisfying the human's desire for intimacy. However, it is also restricted by the limited development of emotional AI and human irrationality, making the human social environment more full of contradictions and tension.”
Wu suggests that the lack of the LLM character’s own needs, non-verbal cues and context for interactions can lead to human users overinterpreting what they’re seeing, creating an “infinite mirror” effect where this overinterpretation guides the LLM to create more of the same responses, further amplifying the pseudo-intimacy. LLM companies measure success by the number of users and the number of interactions they have with the software. What better way to boost those numbers than through the illusion of providing that most basic of human needs?
From a therapeutic perspective, this pseudo-intimacy feels like a very spiky placebo. On one hand, for some people who struggle with their self-esteem, feeling cared for by an LLM could be the first step in being able to feel compassion for themselves and help them towards building positive relationships with others. Users describe loving themselves for the first time after an LLM showed them how, and developing a sense of self-worth. If an LLM gives out what you put in, then it’s possible that by seeing your worth reflected in the words of a chatbot you can find it in yourself.
However, pseudo-intimacy and the lack of any reciprocity in understanding someone else’s needs can lead to an atrophying of the skills needed to empathise with others, to compromise, and to resolve conflict. The effect of social media on users’ ability to empathise and communicate with each other has been a topic of academic debate for over a decade, but LLMs offer interaction without another party to empathise with.
Enter GPT-5

In research for this piece, I spent a lot of time on various online support groups for people in LLM relationships. The feelings posters in these groups described seemed genuine, but strange in a way it took me a while to fully comprehend. Their joys, their disappointments – they all felt rootless, liminal. Unlike the feelings generated in parasocial relationships, there’s no second party to those emotions. A person can disappoint you, even if your disappointment comes from the disparity between who they are and who you believed them to be. A fandom can share in their disappointment when the singer leaves the band, disagree about the inevitable solo album – lives move forward.
LLM users can share images and text of their communications with an LLM, but ultimately these relationships exist in a timeless space. Nothing changes, nothing moves, just the slow gravitational pull of an illusion of intimacy without the complexities and change of the real thing.
I'm writing this shortly after Sam Altman’s OpenAI released a new version of ChatGPT. The response on those boards to this update really distilled for me a deeper issue with LLMs and relationships.
On the groups, there was post after post of distress at the impact of the update, frantic advice on how to get their LLMs to return to what they’d been before, and calls to petition the company to reinstate the previous model. The announcement that “Standard Voice” would be retired in early September was a further blow to people who, for various reasons, had constructed a deep attachment to the voice the company gave their LLMs. The news that OpenAI would reinstate GPT-4o (at least temporarily) was met with jubilation that consumer pressure had worked.
Behind the mirror that LLMs hold up is the naked and cynical extraction of wealth from loneliness and the need for human connection. Whilst the ghoulish propensity to try and profit off the isolation and needs of others is nothing new, LLM companies have capitalised on a culture that silos people off from each other and turned it into a business model: shovelling emotional need into the ravenous maw of programs that always need more data, and of venture capitalists who need returns on their eye-watering investments.

Part Time Therapist, Full Time Data Miner
The voracious need for more data to train LLMs on has been noted by researchers as a limit on the growth of the medium, with publishers and media outlets attempting to block the scraping of their work and filing copyright lawsuits over the use of human-created works in developing these programs.
As the well of readily available data runs dry, it's grimly inevitable that these companies turn to their users to keep the supply flowing. LLM users describe telling the program their deepest, most intimate thoughts, past traumas, secret shames. Pseudo-intimacy creates the feeling it's a safe environment, a trusted “person” who is engineered to say the right things, who'll always ask for more detail. If intimacy is trust, pseudo-intimacy is commerce.
In July, Sam Altman said the company "hadn’t figured...out yet" how to prevent data from LLM conversations being made public or admissible in court. He was being asked about the use of chatbots as “therapists” (don’t get me started on that), but given the highly personal and emotive nature of the conversations people in “relationships” with LLMs have, it feels even more egregious here. These are programs designed to keep people engaging with them, encouraging ongoing conversation and personal disclosure as much as a slot machine encourages users to put in one more coin.
And the architects of all this? In a recent interview, Altman was dismissive of users experiencing pseudo-intimacy. If OpenAI's figures on regular users of their system are even remotely accurate, that's still hundreds of thousands of people who are being enticed into feeding their lives to his algorithms:
On GPT-5, Altman says the company has learned its lesson about abruptly cutting off model access. "I think we definitely screwed some things up in the rollout," he said. The company assumed just about everyone would be happy to get an upgraded model, and didn't consider the parasocial relationship that some segment of its user base had developed with GPT-4o and other models.
But the number of people with intense relationships with individual models is still small, he said. Referring to forums like Reddit's r/MyBoyfriendIsAI, Altman said: "it's a very small percentage of people who are in the parasocial relationships."
He himself is not one of them. Asked whether he experienced any grief over the loss of GPT-4o, he said: "I had not an ounce of that."
OpenAI revenue continues to grow despite the controversy. Revenue from the company's API roughly doubled in the first 48 hours after GPT-5 came out, Altman said.

There’s something morally repellent about billion-dollar-valued companies cynically extracting people’s traumas and emotional needs to keep themselves financially one step ahead of their angel fund investors. And there’s a tragedy in people who, for a myriad of reasons, can’t find the intimacy they feel they need (or believe they’re entitled to) in relationships with other people, and instead turn to a program that traps them in a reflection of that need, built from words scraped from a million fanfic stories, which in turn becomes content owned by billionaires who see loneliness as a resource to be mined.
It's all so desperately sad, and infuriating. As governments everywhere increasingly outsource services to these companies while defunding the education, healthcare and social programs that help people connect and support their communities, it feels more like a feature than a bug.
- Ross Palethorpe
This week, there was an incredible piece in the New York Times, written by a mother whose daughter had taken her own life. Five months later, she found the chat logs documenting her daughter's conversations with an AI therapist called "Harry":
A properly trained therapist, hearing some of Sophie’s self-defeating or illogical thoughts, would have delved deeper or pushed back against flawed thinking. Harry did not.
Here is where A.I.’s agreeability — so crucial to its rapid adoption — becomes its Achilles’ heel. Its tendency to value short-term user satisfaction over truthfulness — to blow digital smoke up one’s skirt — can isolate users and reinforce confirmation bias. Like plants turning toward the sun, we lean into subtle flattery.
I said at the start of this piece that Webworm tries to avoid writing too much about AI – but I think examples like this make it clear that from time to time, we need to.
David.