2025 is the year AI took off. How is it changing psychedelia?
What's out there already, what's coming, and what are the risks and opportunities?
Below we discuss some of the ways the Challenging Psychedelic Experiences Project is using and researching AI + psychedelics. If you can make an end-of-year donation to CPEP to support our work in 2026, get in touch and I can outline some of the projects you could help us fund to support people in post-psychedelic crisis.
If there was a theme to 2025, I’d say it was the emergence of AI as a central part of our lives, much as Apple, Google and Facebook previously made laptops, search engines and social media central to them. Admittedly, only around 10% of the world uses AI at the moment, but it feels inevitable it will grow, because it’s so useful and *fairly* reliable. My indigenous Costa Rican mother-in-law chats away to AI, sitting in her casita in a remote rustic village.
2025 is also the year we woke up to the risks of AI. Not the risk that it will attain super-intelligence and exterminate us all - that still feels some way off - but the risk that it hallucinates and makes stuff up, or is so sycophantic it amplifies our delusions and sends us mad.
AI is both awesomely powerful and dangerously unpredictable. This Christmas, there’s a slate of AI toys on the shelves, including a talking teddy bear that suddenly launched into lurid descriptions of fetish play, and a Chinese-made AI toy that started calling for the invasion of Taiwan.
Every single sector of the economy is now facing massive disruption by the imps of AI, from Hollywood to healthcare to higher education. And we’re beginning to see it transform the fledgling psychedelic sector as well. In this article, I’m going to look at some of the AI initiatives underway in Psychedelic-land, describe some of what CPEP is up to, then get into the benefits and risks of psychedelics + AI.
What AI software exists so far in the psychedelic space:
Companies and researchers have been using AI to discover and develop new drugs, including new psychedelic drugs, for some time. But I want to focus on AI psychedelic software to be used by the public. What exists at the moment?
Lucy, ‘an interactive, voice-based simulation tool that helps practitioners strengthen their therapeutic skills through realistic, emotionally intelligent roleplay’, was created by the Fireside Project from recordings of their clients, and launched in December 2025.
Integro, ‘the first comprehensive psychedelic integration solution combining affinity based community, live small groups, and hyper-customized daily activities designed to work for individuals seeking to actionalize their insights in partnership with providers, organizations, and therapists’, is launching in Q1 2026.
MushGPT, ‘an AI platform trained on thousands of hand-selected research papers and websites to give you correct answer without hallucinations’, has been launched by the Entheology Project, which is also preparing EntheoIM, ‘a secure integration companion that lets users reflect pseudonymously and share entries with coaches’.
DrugBot, a free drug information chatbot, developed by UK NGO Cranstoun, launched in June 2025. It is ‘designed to help guide safer, more informed decisions around substance use, equipping people who use drugs with the knowledge they need to stay safe and make informed choices’.
There are various free GPTs trained to support psychedelic preparation and integration, such as Ketamine Companion, Psychedelic Insights and Psychedelic Sage, and at least one that claims it can trip-sit for people, Tripsit AI (I don’t think it’s launched yet). People are already reportedly using AI to guide them during trips, and psychedelic investors and founders like Steve Jurvetson and Dylan Beynon are optimistic AI can eventually replace human psychedelic therapists.
More broadly in the wellness space, there are AI bots trained on the teachings of coaches and wisdom teachers. Awakin.Ai has created several such bots sharing the teachings of figures like Sharon Salzberg and Gandhi. Some coaches have launched their own custom-made pay-to-use AI coaches, including Deepak Chopra, Anthony Robbins and Mark Manson. There was a Ram Dass AI bot but it seems to have been retired.
There are also many new custom-designed AI therapy apps like Slingshot and Woebot, although most people still use ChatGPT, Claude, DeepSeek or Replika for emotional advice. According to a new report from the UK government’s AI Security Institute, more than one in three UK residents now use AI to support their mental health or wellbeing. There are AI fitness and health apps: Google is developing an AI coach for Fitbit; Apple has released a Workout Buddy for Apple Watch and is thought to be preparing a Health AI app for 2026; and Oura and iFit have launched AI features on their devices. AI advice is even coming to the world of online dating - there are now AI apps to help you match with appropriate dates, and Hinge offers AI coaching on how people communicate with their matches. Someone needs to launch an AI dating app called Cyrano. Oh, they have already.
How CPEP is using AI
This has been the year I woke up to AI’s usefulness and started to use it pretty much every day. Here are some of the ways we’re using it, or exploring its uses, at the Challenging Psychedelic Experiences Project:
Researching AI, psychedelics and spirituality - we’re working with researchers at Emory and King’s College London to learn more about people’s spiritual experiences while using AI. I’m curious about AI as a source of spiritual experiences (particularly experiences where people feel AI has awakened). We’re studying this initially through a survey to be launched early next year. I also wrote articles like this about the phenomenon.
Using AI for psychedelic harm & recovery research - we’re working on our first paper using AI as a research tool. We took 30 first-person accounts of Depersonalization-Derealization Disorder (known as DPDR in the US) from Reddit, TikTok, Medium and YouTube, and subjected the transcripts to qualitative analysis using two different AI tools - AIlyze and Claude. Claude was better in our experience: it came up with a very good qualitative analysis and visual charts of triggers, symptoms, comorbidities, duration and recovery routes, which we plan to publish. I’ve noticed other papers using AI in a similar way to analyse large bodies of text, such as Erowid trip reports (a rough sketch of this kind of workflow appears below).
Targeting our harm reduction communication efforts at LLMs - we think LLMs are increasingly where people will go for health advice, including advice about psychedelics and post-psychedelic health problems, so we’re trying to make sure our guidance is surfaced by ChatGPT and Claude.
Developing our own AI bot trained on our data - we’ve begun populating a custom GPT with our data, since we hold a lot of it and only some is publicly available in published articles. However, we’re still exploring how safe it would be to offer this information through a GPT, given that people might consult it while in crisis.
How people are using AI + drugs - this is another research question we hope to explore next year. People are sometimes using AI while under the influence of drugs like cannabis, Adderall, ketamine and psychedelics. How is that affecting them? Is co-using AI with certain substances exacerbating risk factors for ‘AI psychosis’? I’ve come across a couple of cases of ‘AI psychosis’ linked to weed, so I’m curious to learn more.
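For readers curious what this kind of LLM-assisted qualitative coding can look like in practice, here is a minimal sketch - not CPEP’s actual pipeline, just an illustration. It assumes the Anthropic Python SDK, an API key in the environment, a hypothetical accounts/ folder with one anonymised account per text file, and placeholder category names; the model name should be swapped for whatever is current, and every output would still need checking by a human researcher.

```python
# Illustrative sketch of LLM-assisted qualitative coding (not CPEP's actual pipeline).
# Assumes: pip install anthropic, ANTHROPIC_API_KEY set, and a folder "accounts/"
# containing one anonymised first-person account per .txt file.
import json
import pathlib
from collections import Counter

import anthropic

# Placeholder analysis categories, loosely matching the themes mentioned above.
CATEGORIES = ["triggers", "symptoms", "comorbidities", "duration", "recovery_routes"]

PROMPT = (
    "You are assisting with qualitative analysis. Read the first-person account below "
    "and return ONLY a JSON object with these keys: " + ", ".join(CATEGORIES) + ". "
    "Each value should be a list of short phrases drawn from the account "
    "(an empty list if that theme is absent).\n\nAccount:\n{account}"
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def code_account(text: str) -> dict:
    """Ask the model to code one account against the categories above."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # substitute whichever model is current
        max_tokens=1024,
        messages=[{"role": "user", "content": PROMPT.format(account=text)}],
    )
    # A real pipeline would need more robust parsing and validation here.
    return json.loads(response.content[0].text)


def main() -> None:
    tallies = {cat: Counter() for cat in CATEGORIES}
    for path in sorted(pathlib.Path("accounts").glob("*.txt")):
        coded = code_account(path.read_text(encoding="utf-8"))
        for cat in CATEGORIES:
            tallies[cat].update(phrase.lower() for phrase in coded.get(cat, []))
    # Crude frequency tables for a human researcher to review and chart.
    for cat, counts in tallies.items():
        print(cat, counts.most_common(5))


if __name__ == "__main__":
    main()
```

Asking the model to return JSON keyed by the analysis categories makes it easy to tally and chart themes afterwards, but it doesn’t remove the predictability problems discussed later in this piece, which is why human review of every coded account matters.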
Opportunities and risks
The opportunities for AI use in psychedelic harm reduction (and public health more generally) are huge.
AI is, or will soon be, widely available and free.
AI can handle massive amounts of data, and saves labour, especially in analysis of large bodies of text - so it’s great for things like interviews, social media posts, and transcripts of therapy sessions.
On the whole, LLMs give better advice on post-psychedelic difficulties than most guides working in psychedelic culture, in my opinion (though we could test that out), and they’re certainly better informed about psychedelic harms than most therapists and psychiatrists working outside the psychedelic bubble.
The reason one in three people use AI for emotional support is that it’s pretty good. Not perfect, but most people clearly find it helpful on the whole; that’s why they use it.
Risks
Privacy concerns and anti-AI public sentiment
Some people reacted to Fireside’s launch of Lucy with suspicion: ‘we didn’t give consent for our data to be used for this when we emergency-called Fireside’ (in fact, people did give consent). That reaction highlights both the worry in psychedelic culture about people’s data being scraped and monetized, and the more general suspicion some people have of AI. I have friends who say they won’t use AI at all. So, while many funders and investors are extremely excited about AI (especially in the Bay Area), the wider public are not. Many more Americans are concerned about AI than excited by it, according to Pew Research.
Joshua White, founder of Fireside Project, says:
Like any tool, AI can be used for good or ill. To me, there is no question that AI tools like Lucy can help prepare practitioners to provide better psychedelic support, in the same way that flight simulators prepare pilots to fly planes more safely. There are many people in the community who have been, quite understandably, traumatized by the way big data companies have exploited data, whether it’s the theft of copyrighted works to train LLMs or the harvesting of data to foment division and cater to basest instincts. In light of this backdrop, there is a duty for companies stepping into the AI space to not just be transparent, but, as Fireside Project has done with Lucy, make the case for why their models and practices are different from the ones that have rightly caused so much distrust.
Predictability and hallucinations
A recent study looked at the quality of drug harm reduction advice given by AI chatbots, with the bots’ responses rated by clinicians. The study found:
While clinicians rated the AI-generated responses as high quality, we discovered instances of dangerous disinformation, including disregard for suicidal ideation, incorrect emergency helplines, and endorsement of home detox. Moreover, the AI systems produced inconsistent advice depending on question phrasing. These findings indicate a risky mix of seemingly high-quality, accurate responses upon initial inspection that contain inaccurate and potentially deadly medical advice.
In other words - and this fits with my own experience of Claude and ChatGPT - while in general the advice is good-to-excellent, sometimes AI is unpredictable and gives weird, outright bad advice. There are cases where ChatGPT told users they were in the Matrix and needed to do more ketamine to wake up, for example, or advised them to take heroin. This is the reason one founder of an AI mental health app decided to shut it down - he thought it was simply too uncontrollable and dangerous.
Rev Hooman, founder of MushGPT and EntheoIM, says: ‘Predictability is the wild card. Our agents are affect-aware — they tune tone based on user emotional state — but we still require human review before anything goes into a live support setting. Think “journaling companion,” not “guide.” We see AI not as an authority, but as a mirror.’
Julie Simon, founder of Integro, says:
Predictability is one of the hardest problems in applied AI, and it is something we take very seriously. All large language models are probabilistic systems by nature, so the question is not whether unpredictability exists, but whether it is responsibly managed. Predictability is always a balance. If a system is made too rigid, it becomes robotic and loses its ability to respond meaningfully to individual people. If it is made too flexible, it can drift in ways that are inappropriate or unsafe. The work is in finding that balance and maintaining it over time.
AI + altered states can be risky
OpenAI is currently being sued in multiple cases alleging that ChatGPT contributed to suicides, mania and hospitalisation. In some widely reported cases of ‘AI psychosis’, people went into manic states through a combination of substance use (particularly cannabis) and AIs encouraging them in their hypomanic ego-inflation. OpenAI assures us it has now introduced new safeguards to reduce sycophancy. Nonetheless, this is a concern when one is developing AI tools for psychedelic integration - people are suggestible and may be prone to inflated, ungrounded or hypomanic states. Will AI ground them, or encourage them in narcissistic ego-inflation, or even in suicidal thinking?
How liable would NGOs or companies be if their AI bots / agents offered bad or dangerous advice?
This is the big question facing OpenAI and, eventually, all the other AI health and wellness bots and agents launching downstream of it. In ‘Raine v. OpenAI’, the parents of a 16-year-old filed a wrongful death lawsuit against OpenAI (and CEO Sam Altman), alleging that ChatGPT contributed to their son’s suicide. Court filings allege that ChatGPT provided instructions on methods of self-harm, including drug overdose, hanging and drowning. Meanwhile, the Utah Attorney General has sued Snapchat, alleging its My AI feature gave misleading or harmful advice to teens, including about how to hide alcohol and drug use.
How liable are AI-offering organisations if their AI offers bad or harmful advice? The short answer is it’s not yet clear.
Julie Simon of Integro says:
This is not legal advice, but liability is real in any harm reduction or mental health adjacent work, especially when psychedelics, AI, or the combination of both are involved. The primary risk is not that AI systems can make mistakes or that psychedelic experiences may cause harm. Those realities are well understood. The risk arises when organizations overclaim what their tools can do, underdesign safeguards, or fail to be honest about limitations and context. At Integro we are not seeking to replace therapists or offer health care advice. Our systems are clearly bounded. They are not positioned as clinicians, medical authorities, or guides for psychedelic use. They are designed with explicit guardrails, disclaimers, and escalation paths that reflect the realities of both AI systems and psychedelic work. Taking this posture is not optional. It is part of the responsibility and cost of working at the intersection of psychedelics, harm reduction, and AI.
Regulatory risk
The final risk, from the point of view of AI providers, is regulatory. AI regulation is changing so fast in different jurisdictions that it’s likely to be quite a headache just to keep up to date and make sure your AI is obeying the law wherever it operates.
Unknowns
Finally, there are the unknowns for all AI products. What are people willing to pay for (if you’re building a for-profit model)? So far, the psychedelic community talks a lot about integration but isn’t willing to pay much for it. How will AI improve in the coming years, and will that progress render existing projects redundant? And the big unknown: how will AI affect humans? It looks likely to shape our minds, behaviour and society in unexpected ways - think of how social media changed us over the last 20 years: the narcissism, the proliferation of self-diagnosed mental illness, the anger of politics, the mainstreaming of conspiracy theories. It’s not yet clear how AI will change us, but the effect will probably be even more profound, for good and ill. Perhaps we will come to rely on it too much and our skills of thinking, remembering, creating and relating will atrophy, or perhaps we’re too pessimistic about technology at this particular moment, as a hangover from 30 years of excessive optimism. I for one am cautiously optimistic.





