Peter Thiel and the end of homo sapiens
The future of humanity is being determined by a small group who think that humanity is finished
This article is free, but you should consider getting a paid subscription. Why? You get a free copy of Breaking Open: Finding a Way Through Spiritual Emergency, which Daniel Ingram says is the best book he’s read on spiritual emergencies. You also get access to our archive of over 200 articles chronicling the dark corners of the psychedelic renaissance, plus our video archive with 17 hours of psychedelic safety seminars featuring leading researchers and practitioners in the field. If you become a Founding Member, you also get a copy of Philosophy for Life and Other Dangerous Situations, and you get invited to monthly Founders Club meetings where we discuss what’s going on behind the scenes. All that - and for the month of July you can get a 20% discount on your subscription fee, forever.
This week, an interview with dark lord of the tech right, Peter Thiel, once again launched a thousand memes. Thiel is meme-able precisely because he doesn’t speak in soundbites but instead sort of gasps like a fish in a rowing boat as he tries to translate his galaxy brain into human words.
Someone previously made an hour-long music video just of him mumbling on the Joe Rogan podcast.
Now there are various memes of him from the latest Ross Douthat interview, looking stoned while a wave of technological acceleration crashes around him.
Two moments from the interview went viral. One was him being asked: if he is so worried about the Totalitarian Anti-Christ, is he not worried that he is investing in the very technologies the Anti-Christ would use - AI, mass surveillance and so on? That provoked a classic Thielian fish-gasp.
The other is when Ross Douthat asks him whether, as a transhumanist, he thinks it’s a bad thing if humans were replaced by machines.
Douthat: It seems very clear to me that a number of people deeply involved in artificial intelligence see it as a mechanism for transhumanism—for transcendence of our mortal flesh—and either some kind of creation of a successor species or some kind of merger of mind and machine.
Do you think that’s all irrelevant fantasy? Or do you think it’s just hype? Do you think people are raising money by pretending that we’re going to build a machine god? Is it hype? Is it delusion? Is it something you worry about?
Thiel: Um, yeah.
Douthat: I think you would prefer the human race to endure, right?
Thiel: Uh——
Douthat: You’re hesitating.
Thiel: Well, I don’t know. I would—I would——
Douthat: This is a long hesitation!
Thiel: There’s so many questions implicit in this.
Douthat: Should the human race survive?
Thiel: Yes.
Douthat: OK.
Thiel: But I also would like us to radically solve these problems.
He explains, after some gasping, that he yearns to transcend humanity and thinks technology could be one of the keys to that self-transcendence. So yes…homo sapiens might eventually be replaced.
This belief was at the heart of an argument between Peter Thiel and Elon Musk years ago. They briefly ran PayPal together but have often butted heads. Musk once said: ‘Peter’s philosophy is pretty odd. It’s not normal. He thinks a lot about the singularity. I’m much less excited about that. I’m pro-human.’
While Thiel invested in DeepMind (later Google DeepMind) and remains thrilled by the idea of AI Superintelligence, Musk funded OpenAI because he was worried that the fanatics at Google were building a super-intelligence that could wipe out humanity. Here’s the NYT’s account:
After one dinner party at the conference, Musk and Page [Larry Page, co-founder of Google] got into an argument. As it got heated, more conference goers started to surround them to listen: Page said that Musk was becoming way too paranoid about AI. He had to remember humanity was evolving toward a digital utopia, where our minds would become digital and organic. If he kept making such a fuss about AI, he’d slow down all the next steps there. “But how can you be so sure that a superintelligence won’t wipe out humanity?” Musk asked. “You’re being speciesist,” Page shot back.
Speciesist…
Yet Musk is hardly ‘anti-AI’. He now says things like ‘it increasingly appears that humanity is a biological bootloader for digital superintelligence’. AI researcher Geoffrey Hinton recently said: ‘I talked to Elon Musk the other day, and he thinks we’ll get things more intelligent than us. And what he’s hoping is, they’ll keep us around because we will make life more interesting.’
Musk, having fallen out with Sam Altman and OpenAI, is now putting a lot of his effort into building AGI at xAI - in fact, they’re launching Grok 4 tomorrow. He’s got his coders living in office-tents so they can win the race for AGI.
Musk now thinks AI will inevitably make homo sapiens redundant, and the only option is for some humans to become symbiotic with AI through neural implants, like the ones he is funding at Neuralink, and then we can keep up with AGI, just about. So the future species will be AGI, a small cyborg elite, ten billion robots, and a few billion sterile homo sapiens boofing ketamine and playing computer games.
Over at OpenAI, they’re also racing for AGI, which they think will be created within the next few years. Where does that leave homo sapiens? Sam Altman says ‘some of us’ must undergo “some version of a merge” with digital entities. Those humans who prefer to “live the no AGI life” may enjoy their own “exclusion zone” somewhere far beyond the Bay Area.
In other words, Peter Thiel’s equivocation over the future of homo sapiens is not an unusual view in the Bay Area. Here’s Jaron Lanier, one of the inventors of virtual reality:
a lot of [people in Silicon Valley] believe that it would be good to wipe out people and that the AI future would be a better one, and that we should wear a disposable temporary container for the birth of AI. I hear that opinion quite a lot. Many, many people. Just the other day I was at a lunch in Palo Alto and there were some young AI scientists there who were saying that they would never have a “bio baby” because as soon as you have a “bio baby,” you get the “mind virus” of the [biological] world. And when you have the mind virus, you become committed to your human baby. But it’s much more important to be committed to the AI of the future.
What’s happening at the moment is that the future of humanity is being decided by a tiny group of people united by a culty belief in AGI, which you could call ‘transhumanism’ or Singularitarianism or TESCREAL or ‘tech-religion’. It’s an evolutionary religion which believes evolution can be guided to higher forms through technology - particularly computer coding.
Various forms of evolutionary religion emerged in the decades after Charles Darwin published The Origin of Species in 1859. One evolutionary religion was eugenics, a term coined by Darwin’s cousin Francis Galton, who imagined a new religion to breed high-IQ ‘masterminds’ capable of running the British Empire efficiently.
But as early as 1863, the novelist Samuel Butler predicted that the next stage in evolution might not be superhumans but rather machines.
Butler wrote:
What sort of creature man’s next successor in the supremacy of the earth is likely to be. We have often heard this debated; but it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race.
Since then, a similar idea has often occurred in science fiction. Arthur C. Clarke suggested in 1963:
The most intelligent inhabitants of that future world won’t be men or monkeys. They’ll be machines—the remote descendants of today’s computers. Is this depressing? I don’t see why it should be. We superseded the Cro-Magnon and Neanderthal Man, and we presume we are an improvement. I think it should be regarded as a privilege to be the stepping stones to higher things. I suspect that organic, or biological evolution has about come to its end, and we are now at the beginning of inorganic, or mechanical evolution, which will be thousands of times swifter.
Science fiction author Vernor Vinge prophesied something similar in 1993, coining the term ‘the Singularity’: ‘Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended.’ The idea of artificially intelligent robots surpassing humans is, of course, a staple of science fiction, from Ex Machina to Westworld to The Terminator to Prometheus to Blade Runner to The Matrix.
In the 1990s, the sci-fi dream of machines surpassing humans spread among Silicon Valley engineers via the Extropian mailing list, which included figures like scientist Hans Moravec. He looked forward to ‘a world in which the human race has been swept away by the tide of cultural change, usurped by its own artificial progeny…It doesn’t matter what people do, because they’re going to be left behind like the second stage of a rocket.’
Another figure from the Extropian mailing list was Ray Kurzweil, who went on to work at Google, and who prophesied the ‘age of spiritual machines’: ‘By the 2030s, the nonbiological portion of our intelligence will predominate’, he said. ‘Once a computer achieves a level of human intelligence, it will necessarily soar past it.’
Philosopher Nick Bostrom, also a denizen of the Extropian mailing list, warned that superhuman AI could well pose a Darwinian threat to the human race. On the other hand, it would be so superior to us that, if we could find a way to merge with it, we would enter a utopian state of digital bliss.
The Extropians were a far-out fringe mailing list, but their dreams are now widely adopted in the Bay Area and are shaping the future of AI (and therefore the future of humanity). AI engineer Connor Leahy says:
You can see direct ideological genealogy to a shocking degree…all the way back to one weird niche mailing list in the 90s called the Extropians. [They were] ideologues, religious believers in a sense, who believed in these visions of AI, of transhumanism, of the future, uploading your mind into the computer, living forever, immortality and all these kinds of beliefs…And they are willing to do anything to make the thing happen that they want.
You can see Extropian / transhumanist religious thinking in, for example, Peter Diamandis, executive chairman of the Singularity University:
Anyone who is going to be resisting this progress forward is going to be resisting evolution, and fundamentally they will die out. It’s not a matter of whether it’s good or bad. It’s going to happen.
Or AI researcher Hugo de Garis:
The artilects, if they are built, may later find humans so inferior and such a pest, that they may decide, for whatever reason, to wipe us out. Therefore, the Cosmist is prepared to accept the risk that the human species is wiped out.
Or ex-Google engineer Mo Gawdat:
To put this in perspective, your intelligence, in comparison to that machine, will be comparable to the intelligence of a fly in comparison to Einstein. Now the question becomes: how do you convince this superbeing that there is actually no point squashing a fly?
Or computer scientist Richard Sutton:
succession to AI is inevitable…we should not resist succession, but embrace and prepare for it…why would we want greater beings kept subservient? Why don’t we rejoice in their greatness as a symbol and extension of humanity’s greatness?
Or Daniel Faggella, founder of Emerj AI Research:
the great (and ultimately, only) moral aim of artificial general intelligence should be the creation of Worthy Successor — an entity with more capability, intelligence, ability to survive and … moral value than all of humanity.
Or AI researcher Eliezer Yudkowsky:
if sacrificing all of humanity were the only way, and a reliable way, to get … god-like things out there — superintelligences who still care about each other, who are still aware of the world and having fun — I would ultimately make that trade-off…it’s not that I’m concerned about being replaced by a better organism, I’m concerned that the organism won’t be better
Thank you, by the way, to Joe Allen and Emile Torres for gathering these examples of digital Darwinism among transhumanist AI engineers.
What should be the proper attitude to the Bay Area AI cult? I see people choosing various options:
Turn off the machine: I see Christian thinkers like Paul Kingsnorth or my friend Elizabeth Oldfield declare that AI is anti-humanist and maybe even idolatrous and they are choosing Team Human and mammalian relationality by unplugging and not using Generative AI at all.
Rage against the machine: On the other hand, I see some figures like MAGA mastermind Steve Bannon who think transhumanism is ‘the central civilizational issue of our time…this radical ideology will sweep all that came before it—our institutions, our values, our society. It will disrupt and destroy, first the fabric of our lives, then our lives themselves’. There is nowhere to hide from it, in Bannon’s view. It must be resisted and dismantled.
Bannon’s anti-transhumanist expert is Joe Allen, a Christian polemicist whose book Dark Aeon is an interesting fist-shake at Silicon Valley transhumanists. Allen is basically an unashamed speciesist - he sees AI as an alien intelligence about to move into our planet, take our jobs and, with the help of the tech billionaires, force us into extinction, and he thinks we need to resist and smash up the machine. He is the latest in what I think will be a growing Luddite revolution against AI.
Slow down the machine: Then there are many figures, including within the AI ‘safety-industrial complex’, who think AI is good and AGI will ultimately transform human existence, but we should not be rushing into it in such a headlong way.
This was the message of a recent viral sci-fi project called AI 2027, made by some AI researchers and Scott Alexander, which presented two scenarios for AI development over the next five years. In one scenario, the AI companies race for AGI and there’s an uncontrolled intelligence explosion, with AI coding itself by 2027 and humanity wiped out by 2030. In the other, there’s an internationally coordinated slowdown of the race for AGI, and humanity manages to survive and be guided into digital utopia by its beneficent AI servants.
Open up the conversation: This is my perspective, as someone living in Costa Rica thousands of miles from Silicon Valley. I am aware that my future - all of our futures - is being shaped by this tiny group of people in the Bay Area. That being the case, I think the public conversation on the future of AI and the impact of AI needs to be a lot more open.
I am not sure what I think about AI, yet. On the one hand, I see various New Age people going kinda crazy, thinking they’ve accessed the Akashic Records every time they talk to ChatGPT. I see an unreliable machine without any concern for truth, which, when I ask if it’s capable of reasoning, tells me: ‘I reason like a dream does — fluid, associative, eerily human, but without a self at the center.’
On the other hand, I see an incredible therapy machine, which is already helping hundreds of millions of people. In our last peer group for people dealing with post-psychedelic difficulties, this past Sunday, 30% of the attendees said they’d found ChatGPT helpful and accurate when they were looking for information about their condition. I would bet that ChatGPT is better-informed than 90% of psychedelic therapists / guides when it comes to post-psychedelic harms. So if you ask me whether ChatGPT could be a better psychedelic therapist than a human…well…maybe! At least it doesn’t have an ego, or a penis…
I see the potential of AI as a multipurpose tool, and I use it a lot myself. Could the servant become a tyrannical master and even threaten our existence? I have no idea. But if there’s even a 10% chance of that, then we need a much bigger public conversation on that topic, and we need the leading companies to tell us what’s going on.