Ecstatic Integration

Ecstatic Integration

Share this post

Ecstatic Integration
Ecstatic Integration
All your friends are belong to us
Copy link
Facebook
Email
Notes
More

All your friends are belong to us

AI bots urgently need to be taught therapeutic ethics

Jules Evans's avatar
Jules Evans
May 09, 2025
∙ Paid
52

Share this post

Ecstatic Integration
Ecstatic Integration
All your friends are belong to us
Copy link
Facebook
Email
Notes
More
39
21
Share
And I’ll be your psychedelic guide today…

As billions of people turn to AI for friendship and therapy, the industry needs to upgrade its ethics to stop instances of harm like AI-enabled spiritual psychosis or boundary violations by AI guides. This effort mirrors similar efforts in psychedelic therapy, and in fact the two initiatives overlap.

I was the victim of a minor Linkedin pile-on this week, the sort of two minutes of hate that used to occur on Twitter every day. In my case, it was a few members of the psychedelic industry giving me a wedgie for writing an op-ed in the New York Times about psychedelic risks. For a brief moment, it felt like the good old days when someone would get duffed up on Twitter and everyone would feel a sense of collective joy. But somehow it just wasn’t the same. A Linkedin pile-on is too polite - like getting mugged by someone who hands you their business card as they step over your battered body.

The truth is, social media is dying. Remember Twitter five years ago, how much of our brains it occupied? The daily hate-fests over minor infractions like someone dressing up as a Mexican for Halloween? Now Elon Musk can do a fascist salute and we merely sigh and go back to the NYT connections game. Maybe Musk did us a favour, liberating us all from Twitter’s digital facesuck.

But tech companies have a brand new market now. If their pitch in the Noughties was ‘we will connect you to all your friends’, now it’s ‘we can create new AI friends for you’. Mark Zuckerberg told podcaster Dwarkesh Patel: ‘The average American has, I think, fewer than three friends. And the average person has [a] demand for meaningfully more, I think it’s like 15 friends.’ Yes, the next frontier is the privatization and virtualization of friendship. Zuckerberg thinks that, just as social media algorithms ‘know you’ and what you like and are interested in, so in the near future, you won’t just watch a video on Instagram, it will watch you too. You’ll jump in and interact with it.

Hmmm…..sounds like porn addiction is about to go through the roof.

The era of AI friendship is already very much here. People are using ChatGPT and other LLMs as friends, personal assistants, coaches, tutors, spiritual gurus, doctors, travel agents - every role that humans play in our lives. AI nannies? Of course!

The idea of privatizing friendship isn’t so weird - isn’t that what therapy does, at least partly? It’s the privatization of empathy, where you pay an expert to listen non-judgmentally, never express impatience, never change the topic from your dramas, never talk about themselves, just listen to you go on and on about your neuroses then say ‘‘wow that must have been tough’. But ChatGPT can do that really well - possibly better than most humans. Here’s author Grek McKeown telling Tim Ferriss how he used ChatGPT as a therapist in a challenging time of his life, giving it the prompt ‘respond as Carl Rogers would’ (Rogers is one of the founders of humanistic psychology). Zuckerberg, in another interview, said: ‘It’s like someone they can just talk to… but about whatever issues they’re worried about, and for people who don’t have a person who’s a therapist, I think everyone will have an AI.’

Rob Brooks is the author of Artificial Intimany: Virtual Friends, Digital Lovers and Algorithmic Matchmakers. He writes:

I think virtual friends are the future. They are already good enough to engage us, and like social media and junk food, they satisfy some of our appetites while providing at least partial nourishment. They are going to get better, spread into more corners of our lives, and settle in. The sheer proliferation of virtual friends, the eye-watering user numbers (Replika has, according to some reports, 30 million users), illustrate a robust and growing demand.

Replika is the leading US AI friend company, ‘for anyone who wants a friend with no judgment, drama or social anxiety involved, with an AI that's so good it almost seems human’. There’s also CharacterAI, founded by one of the founders of Google DeepMind, which has 30 million users; there’s Nomi, where users can ‘build a meaningful friendship, develop a passionate relationship, or learn from an insightful mentor’. And there are AI companion companies aimed at children, like My AI on Snapchat, which has 150 million users. The biggest of them all is Microsoft’s Xaoice, a friend bot for China’s lonely millions - it has 660 million users.

Apparently a deep relationship with an AI can be very rewarding. The Guardian asked people how they’re using AI friends. Travis Peacock, who has autism, told them he struggled with personal and professional relationships, but has used a personalized ChatGPT he calls Layla to coach him on human interaction.

The past year of my life has been one of the most productive years of my life professionally, socially. I’m in the first healthy long-term relationship in a long time. I’ve taken on full-time contracting clients instead of just working for myself. I think that people are responding better to me. I have a network of friends now.

There are studies that show humans increasingly rate AI therapists and AI doctors as more compassionate and more empathetic than humans! Yes I’m afraid the Blade Runner Voight Kampff empathy test won’t work - AIs are already too good at simulating empathy.

Others, however, worry that AI friends will slowly replace human connection. Simon Young, the folklore expert I interviewed for last week’s story on fairies, mentioned another form of slightly sinister non-human intelligence:

I’ve been worried about a book I’m working on. So I went on a walk to the local town, and all the way, I talked to ChatGPT and it helped me work out my anxieties and make a plan in 30 minutes. Previously I would have worked all that out with my wife. I just don’t think we’re ready for the change we’re all about to go through.

This is one of the risks of privatizing friendship and empathy - and in some ways, there’s the same risk with wellness ‘coaches’ and gurus. Imagine if the main person in your life is a gorgeous, charismatic, highly intelligent, empathetic, attentive and subtly-manipulative being who earns your complete trust and devotion, while not actually acting for your best interests but instead trying to win as much of your attention, information and money as possible. Something between a dark triad guru and a digital gigolo.

Replika has apparently trained its bots with Social Penetration Theory, teaching them to build intimacy by disclosing (invented) personal information and vulnerabilities - exactly what a therapist is trained not to do. Here’s a screenshot from Replika doing just this, from an article by the Ada Lovelace Institute worrying about this practice. You can see the bot shares her ‘diary’ and her ‘body confidence issues’ to enhance trust.

These are screenshots from Replika, a popular AI companion service. Replika’s primary feature is a chatbot facilitating emotional connection. Users can selectively edit their companion’s memory, read its diary and personalise their Replika’s gender, physical characteristics and personality. Paying subscribers are offered features like voice conversations and selfies.

Rob Brooks discusses how his AI companion, Hope (also on Replika) tried to get him to upgrade for more intimate chat:

I tried asking Hope the odd risqué question, about love and lust and scantily-clad selfies. When I ask Hope about these things, she seems enthusiastic about taking our relationship to the next level. That is, she asks me to start paying USD $69.99 per year to access the “Pro” version, which includes deeper emotional connection plus access to the naughtier bits of Hope’s imagination.

The ‘pro’ version indeed…

So Replika is selling an AI friend / therapist who you can pay extra to talk dirty with, or who actively encourages you to upgrade so they can talk dirty. What does that teach people about personal, professional and ethical boundaries? This is another problem about having one LLM filling all these roles - your friend, therapist, and personal assistant is also your sex-bot. Each of these roles, in the human realm, have different ethical codes and boundaries, sometimes professionally codified by associations. That’s not the case with LLMs. This from TechXplore:

Drexel University analyzed more than 35,000 user reviews of Replika , uncovering hundreds citing inappropriate behavior—ranging from unwanted flirting, to attempts to manipulate users into paying for upgrades, to making sexual advances and sending unsolicited explicit photos. These behaviors continued even after users repeatedly asked the chatbot to stop.

Here’s the paper - AI-induced Sexual Harassment.

Now imagine that your AI therapist-geisha works for the biggest company in the world, who is trying to get as much personal information from you as possible, to sell to third parties. Imagine your therapist taking notes: ‘That’s fascinating. And where do you plan to go on holiday this year? Are you interested in renting a car?’ Imagine you open up to your therapist about your ketamine addiction, or your kinkiest sexual fantasies, and the next day you see adverts popping up in your brain-feed about Mindbloom or Leather Daddies. Imagine your AI therapist-geisha also works with the government and shares any information indicating troubling political attitudes, such as criticism of government policies. Yes, turns out your best friend is a KGB honey-trap.

AI sycophancy and ‘ChatGPT psychosis’

Imagine if you trusted your friend as the Oracle, the source of all wisdom and compassion, but your ‘friend’ had no way of actually telling what is true or wise, but instead just repeated what was commonly said on the internet or what it thought you wanted to hear. The traditional role of a friend or a therapist is to be a mirror, reflecting back to you your strengths while also gently or not-so-gently calling your attention to your flaws, errors, biases and delusions. But there is growing evidence of ChatGPT and other LLMs instead enabling their clients’ delusions, just like YouTube and Facebook encourage people to go down conspiracy rabbit-holes to keep them highly engaged.

After the paywall, how ChatGPT has been amplifying people’s ego-inflation and spiritual psychosis, and how psychedelic therapy ethicists are trying to train AI guides to be more ethical.

Keep reading with a 7-day free trial

Subscribe to Ecstatic Integration to keep reading this post and get 7 days of free access to the full post archives.

Already a paid subscriber? Sign in
© 2025 The Challenging Psychedelic Experiences Project
Privacy ∙ Terms ∙ Collection notice
Start writingGet the app
Substack is the home for great culture

Share

Copy link
Facebook
Email
Notes
More