Silicon Valley Has a God Problem
And we're all up for sacrifice.

Hi,
I try not to talk about "AI" too much on Webworm, because I think it's being shoved down our throats enough already. I, for one, am sick to death of it.
But today on Webworm, I'm really happy to bring you this incisive, slightly terrifying piece from Joshua Drummond about the beliefs of those behind it. It's a story I haven't really seen told before. Not this succinctly.
I suggest you find some quiet time to take this in, because it's a lot. But I think it's really, really important.
But just quickly, a photo from another important thing: yesterday's megastrike in New Zealand, featuring Webworm reader – and, more importantly, healthcare worker – Carolyn. She's even wearing her custom I HAVE WORMS hat from one of our Webworm popups.
Over 100,000 workers took part, making it New Zealand's "biggest labour action in four decades." Carolyn had this to say to Webworm:
"It was incredible being part of this historic rally – everyone from all walks of life rallying together to support each other and to stand up for workers rights! There were inflatable frog and dinosaur dress ups… the firefighters brought their trucks and put the sirens on, and people were flirting with them through the megaphone. A big resounding 'fuck you' to this government, who puts corporate profits over people."
And with that resounding "fuck you", onto the disturbing beliefs behind our descent into AI chaos.
David.
If you want to support Webworm, and you haven't already, consider signing up as a paid member. This goes towards things like Webworm's legal defence fund & paying guest writers like today's. Webworm is 100% reader supported, with no corporate backers or advertising.
Sacrificing Everything To The New Gods
How OpenAI's Sam Altman, Meta's Mark Zuckerberg & former Google CEO Dr Eric Schmidt believe in something that will kill us all.
by Joshua Drummond
If you are alive and relatively normal in the year 2025, you might be forgiven for thinking that tech leaders have gone mad.
You would be right. But the true breadth and depth of that insanity is… difficult to convey.
The problem with writing about insane beliefs is that the writer inevitably also sounds insane. And describing some of these beliefs is like trying to transcribe the sounds made by a herd of screaming goats. There are too many screams to choose from and none of it makes any sense.

But if I were to translate the overall vibe of that scream, it's that the tech industry claims it is about to develop a kind of super-AI, called – appropriately enough – superintelligence.
Here is OpenAI founder Sam Altman:
"We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence.
Systems that start to point to AGI are coming into view, and so we think it’s important to understand the moment we are in."
Those two terms – superintelligence and AGI (Artificial General Intelligence) – are unhelpfully difficult to define. They might be best understood as "nonsense words to juice up the market". But I'll try for an honest definition.
AGI is usually held to mean "a computer system that can do anything a human can do" and superintelligence means "a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds." This mind would surpass human intelligence, both individual and collective, by an order of magnitude. It would be a being that could build anything and solve any problem.
In other words: a god.
The planet's tech leaders have got religion.
And in the world's storied history of inventing mad, cruel, and deeply stupid religions, this new one might be the cruelest, the most dangerous, and the stupidest religion of all.
Bullshitters Usually Win
As with several religions, there’s a problem at the core.
There is no credible evidence to suggest we're "close to superintelligence," or anything like it.
One issue is that there is no plausible method through which superintelligence or AGI can arise from the technology that we refer to by the marketing term "AI".
Large Language Models, as we have explored on Webworm, are arguably less intelligent than a fruit fly. LLMs cannot think. They are text prediction engines, whose job is to make up a plausible response to a user-entered prompt. If you want a fun example of just how intelligent LLMs are, get one ChatGPT account to talk to another one and watch the conversation almost immediately spiral into an infinite loop of nonsense.
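To make "text prediction engine" concrete, here's a toy next-word predictor in a few lines of Python. It is nothing like an actual LLM under the hood – real models use billions of parameters over sub-word tokens rather than a word-count table, and the tiny corpus below is mine, purely for illustration – but the job is the same: given what came before, continue the text plausibly, with no understanding required.

```python
# A toy next-word predictor. Real LLMs are vastly bigger and fancier,
# but the core job is identical: predict a plausible continuation.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(prompt_word: str, length: int = 5) -> str:
    words = [prompt_word]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        # Pick the next word in proportion to how often it followed before.
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("the"))  # e.g. "the cat sat on the mat"
```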
And I am becoming annoyed as I write, because this topic has been done to death. We know what LLMs are! We know how they work! It should be enough to know that LLMs are not intelligent. But it isn't, because anyone attempting to take this topic seriously immediately runs into the bullshit asymmetry principle: “The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.”
This combines with another issue: bullshitters usually win because – as evidenced by how Sam Altman's ridiculous statements are taken as gospel by a credulous press and public – they can project absolute, blithe certainty, whereas anyone handicapped by intellectual honesty has to admit to blind spots and doubt.
So, instead of yet another article debunking belief in superintelligence arising from LLMs, you will have to take my prior word (and a neuroscientist’s word) for it. Plenty of others have done the debunking better than I can.
Instead, let’s look at where these beliefs come from.
The Worst Acronym Ever
There's an acronym you should become familiar with, because it sums up Silicon Valley's new religion: TESCREAL.
Coined by computer scientist Timnit Gebru and philosopher Émile P. Torres, it stands for “Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism”.
You can be forgiven for not knowing any of those weird names for what I can promise you are much weirder concepts. Each is its own school of thought, but they interconnect, forming a smorgasbord of bizarre beliefs that believers can pick and choose from.
Each of these concepts is worth understanding in detail, because only then does the true madness become apparent. But for brevity, I will attempt to summarise it all very quickly:
Transhumanists, Extropians, Singularitarians, and Cosmists believe, with some differences, that we can become literally super-human, very long-lived, or even fully immortal with sufficient technological and biological augmentation – enabling space colonisation and uploading our minds to computers.

All this becomes possible thanks to hypothetical versions of nanotechnology and the advent of AI superintelligence.
Meanwhile, capital-R Rationalism is less like traditional rationalism (making decisions based on evidence) and more like a secular cult that believes (irrationally) that superintelligence is Public Enemy Number One. One of its founding texts is a 660,000-word, 122-chapter fan fiction called Harry Potter and the Methods of Rationality, which should tell you everything you need to know.
Effective Altruists are OK if you don't dig too deep; they believe in doing the greatest good for the greatest number of people through effective charitable giving and other endeavours. Which seems fine, until it morphs into Longtermism, the weirdest one yet.
Lifting a little from each previous category, and using some astonishingly specious maths and reasoning, longtermists believe that the advent of superintelligence will enable humans to colonise space, the stars, distant galaxies, and eventually the entire universe, and that the lives of all these trillions of hypothetical humans living in space and in computers are of such infinite value that our lives are comparatively worthless – unless, of course, a life is dedicated to bringing this staggeringly unlikely future about. Longtermists therefore advocate for spending huge amounts of humanity's resources on space colonisation, nanotechnology, AI, and creating superintelligence.
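To see what that "astonishingly specious maths" looks like in practice, here's a sketch of the expected-value arithmetic longtermists lean on. The numbers are illustrative ones I've chosen, not figures from this article (Bostrom-style estimates of future lives run to 10^16 and beyond), but they show how even a vanishingly small probability, multiplied by an astronomical payoff, is made to outweigh everyone alive today.

```python
# Longtermist-style expected-value arithmetic, with illustrative numbers
# of my own choosing – none of these figures come from the article.
future_lives = 10**16          # hypothetical people living in space and in computers
probability_of_future = 1e-6   # even granting the future is wildly unlikely
current_lives = 8e9            # people alive today

expected_future_lives = future_lives * probability_of_future
print(expected_future_lives)                  # 10000000000.0 – ten billion "expected" lives
print(expected_future_lives > current_lives)  # True
# By this logic, a one-in-a-million shot at the imagined future still
# "outweighs" everyone alive today, which is how you get to treating
# present lives as comparatively worthless.
```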
Superintelligence as Faith
If you're looking at that list of strange beliefs and thinking "that sounds a lot like science fiction!" you'd be right. It does sound like science fiction, because – with a few quibbles and caveats – it is, derived and expanded from everything from Neuromancer to Iain M. Banks’ Culture series to Star Trek. And it’s not the authors’ fault that nerds have mistaken their stories for reality. Journalist and astrophysicist Adam Becker's excellent non-fiction book More Everything Forever goes into the claims made by TESCREAL adherents, as well as those of related beliefs, and finds practically all of them wildly implausible.
To summarise: nanotechnology cannot do anything like what TESCREALs expect, and likely never will; space exploration and colonisation simply cannot proceed in the manner they envisage; there is no evidence that we will ever be able to upload human consciousness to computers, much less anytime soon; and – most importantly – there is no plausible path from LLMs to even human-level intelligence, much less Artificial General Intelligence or superintelligence.
Of course, these inconvenient truths haven't stopped the industry’s hype-men like Sam Altman:
"OpenAI is a lot of things now, but before anything else, we are a superintelligence research company. We have a lot of work in front of us, but most of the path in front of us is now lit, and the dark areas are receding fast.
It is possible that we will have superintelligence in a few thousand days."
The fact that it's all sci-fi doesn't seem to matter. Superintelligence – both the promise of it and the supposed existential threat it poses – is an article of faith for some of the richest, most powerful people in the world.
But what, you ask, of non-hypothetical, actually existing existential threats to individuals, communities, and the globe as a whole? Things like poverty, conflict, nuclear war, and climate change?
All that, according to our tech overlords and their prophets, can get in the bin.

“The arrival of an alien intelligence”
"For hundreds of years, we properly glorified technology – until recently," writes venture capitalist and Substack investor Andreesen Horowitz in a lengthy Gospel-riffing rant, titled The Techno-Optimist Manifesto. "I am here to bring the good news. We can advance to a far superior way of living, and of being."
“If we end up misspending a couple of hundred billion dollars,” says Meta CEO Mark Zuckerberg, fresh from misspending fifty billion dollars on the Metaverse, “that’s unfortunate. But I think the risk is higher on the other side. If you build too slowly and then superintelligence is possible… then you’re just out of position on what I think is going to be the most important technology that enables the most new products and innovation and value creation in history.”
"The demand is infinite," says former Google CEO Dr Eric Schmidt. "All that [energy efficiency] will be swamped by the enormous needs of this new technology, because it's a universal technology, and because it's the arrival of an alien intelligence."
Schmidt is doing a "fireside chat" in his role as Chair of the Special Competitive Studies Project, and he's explaining why we need to abandon efforts to be more efficient with energy, reduce greenhouse gas emissions, and mitigate climate change. AI's "infinite demand" requires us to build data centre after data centre, endless fields of Graphics Processing Units, all working together to produce endless video slop – sorry, "unprecedented breakthroughs."
"We're not going to hit the climate goals anyway," Eric says. "Because we're not organised to do it. Yes, the energy needs in this area will be a problem, but I'd rather bet on AI solving the problem than constraining it."
We must, Eric is saying, sacrifice the climate to this new god. Only then can the new god save us from the climate.
"I do guess that a lot of the world gets covered in data centres over time," says Sam Altman, on a podcast with Theo Von. "Or maybe we put them in space. Like we build a big Dyson sphere around the solar system."


Here, Sam is pivoting from his vision of global giga-ecocide in the service of the god he claims to be building, to musing about building the god in space, in a Dyson sphere — which is an artificial sphere encasing the entire sun.
It must be asked, at this point: do these people believe any of what they're saying?
I think, to an extent, they do. Not necessarily literally – if Sam does not realise that his sun-spanning sphere cannot be built, he's delusional past the point of insanity – but in a way that's much more destructive than mere delusion.
In the film Groundhog Day, Bill Murray's character acts out when he realises that he is trapped in an infinite time loop and therefore his actions do not have consequences.
Belief in the godhood of AI has enabled tech billionaires to do the same thing. If their new god will save us, then any excesses – the despoilment of the biosphere, collusion with a fascist regime, the endless pursuit of more billions, more power – are excusable.
AI is their shithead licence.
Sacrificing The Universe (No, Really)
In addition to the printing of licences for the rich, powerful and insufferable to be ever more rich, powerful and insufferable, there's another vital factor at play.
The AI bubble – for bubble it surely is – is the latest in a series of useful technologies (the cloud, Software as a Service, videogames) and grifts (NFTs, crypto, the "metaverse") that have two aims:
1. selling CPUs and GPUs and building datacentres to run them.
2. sustaining the impression that the tech industry can and should keep growing indefinitely, meaning the tech industry's shares can be growth stock forever.
"Growth stock" is worth much more than normal shares, because it provides a much larger return on investment for whoever holds it. And the tech industry - which has grown to a remarkable degree, to the point that it now dominates many aspects of the lives of nearly everyone on earth - lives in fear of its growth stopping, as that would markedly reduce the value of its stock.
It has much in common with the reason why fossil fuel companies will not allow renewable energy to supplant oil, coal and gas; they have invested trillions of dollars in their deadly infrastructure and if renewable energy gets too cheap, they will lose their investment. It's as simple as that.
To keep their assets from being stranded, they lobby, steal, and lie; they screw elections, install puppet leaders, and create wars – both actual wars and the culture wars we know so well. Even the current culture war against trans people was concocted largely by think tanks with connections to the fossil fuel industry.
Now the tech industry, long (erroneously) perceived as environmentally-concerned, has done a deal with the fossil fuel devil; its new mega-datacentres are powered by carbon-spewing gas turbines that roar day and night to create Grok's Hitler-praising gibberish and Sora's endless torrent of video slop.
And, we are told, we must make this putrid burnt offering forever, for if we do not, god will never exist, and it will never save us. It will never supernaturally solve climate change; it will not rapture our digital souls to a state of eternal bliss in the Cloud; it will never allow us to fulfil our manifest destiny in the heavens.
If that sounds somehow familiar, it should: belief in the emergence of a superintelligent AI god is a near one-to-one match for the tenets of fundamentalist, evangelical Christianity.
This should worry everyone, Christian and secular. If you're Christian, this is Tower of Babel stuff, textbook blasphemy. Raising up false gods is as sinful as sin gets. And if you're not a Christian, you should probably be worried about — as I wrote in my last article on AI — sacrificing the universe's only known biosphere on the altar of making lines go up.
Because that’s what they’re doing.

Their New God Will Save Them (It Won’t)
While there are plenty of uses, good and bad, for LLMs and the other technologies riding under the banner of AI, none of the endgames proposed by the adherents of the new AI religion make the slightest sense.
"Altman apparently wants to make the United States into one enormous company town, with shares in OpenAI replacing the dollar…" writes Adam Becker, in More Everything Forever. "This is a proposal for total capture of the national economy, making Altman functionally the king of the United States and possibly the world."
This absurd scenario is only possible if OpenAI can develop AGI and superintelligence – which, as discussed, it can't. Given this, its current actions look more like setting money on fire.
The more moderate scenario, envisaged by the likes of Anthropic CEO Dario Amodei, is that the extraordinary productivity gains of AI allow corporations to lay off ever-vaster numbers of employees. Dario suggested in June 2024 that AI could eliminate half of entry-level jobs and push unemployment as high as 20 percent. Other estimates run higher.
This seems increasingly unlikely, given that the data on so-called "AI transformations" are starting to come in, and the results are often, to put it mildly, utter dogshit.
But even if it did happen: a world with 20% unemployment under a moderate AI scenario, or mass unemployment under an (impossible) AGI or superintelligence scenario, simply won't work. Unemployment on that scale would annihilate the global economy, outpacing Covid by an order or two of magnitude – and that's before you bring unsolved climate change into the equation.
And climate change must be accounted for. No matter how much delusional oligarchs would like it to, growth – coupled as it currently is to energy use – simply cannot continue forever. Their religious fervour for growth is a broad road to hell, and if we take that path, we will burn.
I am, unfortunately, being literal, not figurative.
The facts of physics do not care about our feelings. If, as More Everything Forever points out, growth stays coupled to energy use, and our annual energy usage increases at anything like our current rate of three percent per year, within 400 years we will be “using as much energy as the Sun provides to the entire surface of the Earth annually."
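For what it's worth, the arithmetic behind that claim is easy to check. Here's a rough back-of-envelope version; the starting figures are my own ballpark numbers rather than Becker's, so the exact year count shifts a little, but the conclusion doesn't.

```python
import math

# Back-of-envelope check of the "exponential growth meets sunlight" claim.
# Both starting figures below are my rough assumptions, not the article's.
current_use = 6e20      # world energy use, joules per year (~600 exajoules)
solar_input = 5.5e24    # total solar energy reaching Earth, joules per year
growth_rate = 0.03      # the 3% annual growth cited above

# Years until compounding 3% growth catches up with total sunlight.
years = math.log(solar_input / current_use) / math.log(1 + growth_rate)
print(f"~{years:.0f} years")  # roughly 300 with these inputs
# Becker's ~400-year figure uses different starting assumptions, but either
# way, a few centuries of business-as-usual growth runs into hard physics.
```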
Never mind the icecaps melting; that much heat would boil the oceans. It would create a near-total extinction event – not because of an AI superintelligence, but because of the opposite.
Sheer human stupidity.
And it's this, more than anything, that makes me think that tech oligarchs are true believers in their terrible new religion, perhaps the worst one yet. They can do maths. They know the path they have put us on.
They really must think that growth can continue forever, because they are creating God, and their new deity will allow them to escape the laws of physics — or at least spirit them to digital paradise before they face the appalling consequences of their actions.
They're wrong, but if they're not stopped, it won't matter.
-Joshua Drummond.
PS: A documentary I executive produced called Mockbuster is having its world premiere in Australia tonight, before heading to film festivals in 2026 and getting a theatrical run in the US. It's made by some fresh doc makers out of Australia who I really liked, and I'm stoked to see their vision come to fruition. I don't want to do any spoilers, but central character Anthony is tapped to direct a film for The Asylum, the studio responsible for incredibly C-grade films like Sharknado and Sharknado 2: The Second One. What happens? Well, chaos.
