Doomer-Described Artificial SuperIntelligence is an Anti-Science Pseudo-Religion
Thoughts on Society and AI: The Factions: Part 2
Today I’m going to talk about the claims the AI Doomers make, the language they use when they’re trying to scare people, why those claims are not based in reality, and why you should not be fooled by them.
Let me say this right off: I believe in Science, and I believe in evidence-based decision-making.
If those aren’t your values, there’s no point in you reading any further, it would just be a waste of your time. Bye-bye.
The Fallacy of Ambiguity (or Equivocation)
That statement of values aside, a proper argumentative essay is supposed to start with a definition. And that’s convenient for this topic, because definitions are one of the bigger problems with discussing AI these days. The whole industry relishes the fact that much of the vocabulary used in AI discussions is ambiguous. For example, when people in the industry say “AI”, sometimes they mean the Large Language Model technology that’s at the core of the current generation, sometimes they mean “ChatBots”, sometimes they mean “what you saw in The Terminator” or “The Matrix” - in short, it means whatever they want it to mean, and that makes it difficult to pin them down. I made a video about the 6 different meanings of the phrase “AI” in current parlance, which I’ll link here, in case you want more information on that.1
And just like the ambiguity about “AI”, there’s another ambiguous term we’ll be discussing in this essay, which is “Artificial SuperIntelligence” or “ASI”.
Let me quote briefly from “If Anyone Builds It, Everyone Dies” a.k.a. the AI Doomer Bible: “...smarter than any living human, smarter than humanity collectively... We might call AI like that “artificial superintelligence” (ASI), once it exceeds every human at almost every mental task.”2
Do you see the multiple definitions there? Does “exceeds every human at almost every mental task” mean the same thing to you as “smarter than humanity collectively”?
While the term “mental task” is not defined in the book (which is convenient for the authors and unhelpful for the rest of us), one thing it does mention in this context is playing Chess. And it is true that there are AIs (not chatbots, not Large Language Models, but specialized AIs) that are better than all humans at Chess.
But how smart is “humanity collectively”? Here’s one example: smart enough to create a Chess program that can beat any human at Chess. And I don’t mean “making a new chess program as good as the best chess programs we already have”, I mean “conceiving of how such a chess program might be built for the very first time, and chasing that concept through trial and error until it finally comes to fruition.”
Do those things seem equivalent to you? Because they don’t to me.
If they define “mental task” as “answering the kinds of questions that are used as AI benchmarks”, the idea of AIs (not necessarily Large Language Models, but some kinds of AIs) eventually being able to “exceed every human at almost every mental task” is conceivable. But that’s nowhere near what would be required for being “smarter than humanity collectively”. There’s no evidence that AIs could scale to “smarter than humanity collectively” (not that the phrase “smarter than humanity collectively” has any useful meaning or metrics).
This deliberate ambiguity of language is indicative of the kind of pseudo-scientific nonsense that permeates the discussions of Artificial SuperIntelligence.
Incidentally, this is why no one should ever - EVER - debate any of these AI Doomers, AI Accelerationists, AI Evangelists, or AI Company employees - because they constantly use ambiguous terms, wait until you object to something, and then reframe the ambiguity in their statement to make your objection seem unreasonable. The only way to have an honest conversation with them would be to stop them after every sentence and insist they specify exactly what they meant in that specific case by every ambiguous term they just used, and that’s just not feasible.3
Appeal to Ignorance
But, it gets worse. Because not only does their ambiguous language confound attempts to make counterarguments, but their arguments are all constructed in such a way that it doesn’t matter. Because none of their arguments (to the extent you can even call them arguments) are in any way falsifiable.
There are no experiments that can be done, no evidence that can be presented that will impact their predictions about the future. No matter how far off the mark they are, they just move the timeline and insist that they’re still correct. We’ve already seen the “AI 2027” Doomers saying that “Our timelines were longer than 2027 when we published and now they are a bit longer still” and that it’s “AI 2030” (or later) now.
This is deeply unscientific and unserious. The closest they get to presenting any actual evidence is the equivalent of “Look at Graph. Number go up.”
Instead, they riddle their writings with inaccurate and ahistorical retellings of past events, like the portions of “If Anyone Builds It, Everyone Dies” about Aztecs arguing about a hypothetical enemy who “[points] a long stick at us, and we fall over dead” - as if indigenous civilizations had no concept of projectile weapons - and ridiculous straw-man dialogues between birds arguing about aliens, farcical quiz shows, and literal rocket engineers arguing that rockets “have no reason to explode”.
Disingenuous falsehoods and undisclosed conflicts of interest
Much of the scaremongering in AI Doomer scenarios is based on repeating anecdotes and statements that supposedly come from “people inside the industry”, without disclosing or explaining that people in the AI industry have a financial interest in being perceived in certain ways by the public.
For instance, there is a very common pattern of Doomers treating statements made by “people working in AI” (often anonymously sourced) as if they were evidence or fact. In an interview with Hank Green4, one of the authors of “If Anyone Builds It, Everyone Dies” is asked “Is SuperIntelligence inevitable or even possible?” and he answers “Are the people at these companies trying to build SuperIntelligence? You know, many of them will say yes.” As if the fact that someone saying they’re trying to build something means that thing is either inevitable or possible - especially when that person has a financial stake in the public (or at least the investing public) believing it. And there was, of course, no discussion or disclosure of the potential financial interest of the unnamed persons he was supposedly quoting.
One of the most pernicious of such statements is “The most fundamental fact about current AIs is that they are grown, not crafted. It is not like how other software gets made—indeed it is closer to how a human gets made, at least in some important ways”5. They come back to variants of this premise over and over. But that statement is deceptively structured and relies on poorly defined words.
As always, the word “AI” there is badly defined, and you might guess that the word “grown” is problematic too - and you’d be correct. But the most inconsistently used word in that statement is actually “software”.
To illustrate, consider these two websites: nasa.gov6 and rollingstone.com7. We’re told that both of these sites are running the same software, which is called “WordPress” - at least according to the people who write the WordPress software.
There are some stylistic similarities, but it’s obvious that these two websites are quite different. The Software might be the same, but the data each site displays - what many people would call “the content” - is what determines what people see when they go to that site.
This division between the functionality of a software program - the code - and the information or content the program presents - the data - is pretty normal in the Internet age.
A chatbot has a similar distinction. There’s the part that you interact with - the part that takes your prompt, turns it into a bunch of numbers and does a bunch of math on it to get a new bunch of numbers, and then converts that final batch of numbers back into words and gives it to you as the response - that is code, and a person wrote it and we know exactly how it works. The other part is referred to as “the model” - and it’s a giant collection of millions or billions of unchanging numbers that is unique to that particular model version. That “model” part is the data that the code part is working with.
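To make that split concrete, here’s a minimal toy sketch (my own illustration, with made-up numbers - not how any real chatbot is implemented). The function below is the “code” part, human-written and fully understood; the weights it uses are the “data” part. Swap in a different set of weights and you get different behavior without changing a single line of code.

```python
import numpy as np

# The "data" part: in a real chatbot this would be billions of parameters
# loaded from a file; here it's a tiny made-up matrix for illustration.
weights = np.array([[0.2, 0.8],
                    [0.9, 0.1]])
vocabulary = ["hello", "world"]

def respond(prompt: str) -> str:
    """The 'code' part: human-written and fully understood.
    Turn words into numbers, do some math, turn numbers back into a word."""
    # Encode: count occurrences of each known word.
    vec = np.zeros(len(vocabulary))
    for word in prompt.lower().split():
        if word in vocabulary:
            vec[vocabulary.index(word)] += 1.0
    # "A bunch of math": multiply the input numbers by the stored weights.
    scores = weights @ vec
    # Decode: turn the resulting numbers back into a word.
    return vocabulary[int(np.argmax(scores))]

print(respond("hello"))  # the response depends on the weights (data), not the code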
When the Doomers say “AI is grown”, what they’re talking about is where the data in the model - the billions of numbers called parameters - came from.
But when they contrast that with “how other software gets made” they’re not talking about data, they’re only talking about how the code gets made - or “crafted” to use their word.
This is very deceptive.
It would be honest to compare the human-written code in the chatbot to the human-written code in other kinds of software. It would be honest to compare the way the Large Language Model parameter data is created to the way data is created or collected in other kinds of software. But they intentionally conflate code and data when using the term “AI”, and then use the term “software” to compare against only the code part of non-AI software.
If we were to do an honest comparison, we would say that AI models are not the only data that are “grown”. There are lots of kinds of software that use seed data - like pages scraped from the Internet - combined with mathematics to “grow” data sets that are later used by software when interacting with people. Virtually every large-scale search service works that way. It’s true for things like Google Search8, where databases are grown by web crawlers and indexers that fetch and analyze web pages so their contents can be found later. It’s true of the software you use to search your email or your hard drive. It’s true of the software Amazon uses to find products for you. It’s true of the software Yelp uses to give you a list of “fast food drive thru restaurants near me”.9
In fact, there are many ways in which the function of a Large Language Model-based chatbot is very similar to the functions of a search program.
We don’t often use the word “grow” when we’re talking about search data, although we do sometimes. We usually say “build” or “generate” or something, but it’s the same basic operation: we take large numbers of files, web pages, documents, or the like, run a bunch of math on them, and store the result, so that later we can use those stored results, along with input from the user, to do a different but related kind of math and give the user their search results.
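Here’s a minimal sketch of that “grow it once, query it later” pattern - a toy inverted index of my own invention, not any particular search engine’s actual implementation:

```python
from collections import defaultdict

# "Growing" phase: run code over a pile of seed documents once,
# and store the result as data.
documents = {
    1: "fast food drive thru near me",
    2: "best fast pasta recipes",
    3: "drive safely near schools",
}

index = defaultdict(set)  # word -> set of documents containing it
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

# Query phase: different, but related, math over the same stored data.
def search(query: str) -> set:
    """Return the ids of documents containing every word in the query."""
    matches = [index.get(word, set()) for word in query.split()]
    return set.intersection(*matches) if matches else set()

print(search("fast drive"))  # {1}
```

Nobody “crafted” the contents of that index by hand; it was grown from the seed documents by human-written code. That’s exactly the code/data division the “grown, not crafted” framing obscures.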
Do we “understand” the search indexes? Not really. If the data set were small enough, we could figure it out (not that we’d bother, but we could), but once an index grows to billions of entries, taking up tens or hundreds of gigabytes of space? No way.
So, why do they push this narrative? I can think of a few potential reasons. First, they need what they’re doing to sound wholly unprecedented to justify the unprecedented amount of money being spent on it. Second, convincing people that the AI isn’t under their control helps them avoid responsibility for the bad things it does. Also, there are a lot of people working in AI who don’t seem to have done any other kind of Software Engineering work, so probably some of them just have no idea.
Blaming SuperAI for Existing Problems
One ridiculous misdirection the Doomers engage in is to take problems that the current AI industry is already causing and tell you that only stopping SuperAI can prevent them.
The canonical example of this is whenever the Doomers warn that a SuperAI will misinform people, manipulate people, interfere with elections, facilitate polarization, or anything like that. AI is already doing that - as a more potent extension of what Social Media has already done to society. We can take actions to alleviate that - indeed, we must - without ASI being in any way relevant. But by trying to tie disinformation to future ASI instead of to the AI of the present, they try to distract us from the actions that we should be taking right now.
The Science-Free Faith-Based Nature of SuperIntelligence Claims
The most relevant - and unfortunate - way that the Doomers talk about ASI is when they’re trying to make people afraid. When they are trying to scare people about ASI, they describe ASI as being so powerful it defies evidence, logic, and science. And all while they insist they’re being rationalists. It’s actually quite pathetic.
Just to be clear - here’s what I’m saying: When the AI Doomers are trying to scare people about ASI, they describe ASI as being so powerful it defies evidence, logic, and science.10
A Quick Chaotic Aside
In a little while I’m going to walk you through some examples. But first, I need to introduce you to the mathematical concept of Chaos.
I’m just going to describe this concept, not justify it. If you want more detail, I recommend the book “Chaos: Making a New Science” by James Gleick11. I have a bunch of books on this subject, but this one is by far the most approachable for people who don’t want to deal with the math, so I can’t recommend it highly enough. If you’d rather watch instead of read, I’ll also link a couple of videos below.12
Chaos is a term we use in science to describe systems that are so complex that a tiny, tiny change - in fact, a change too small for us to measure - over here in one part of the system leads to HUGE changes in some other part of the system as that tiny change propagates over time.
The example normally used to illustrate this is called “The Butterfly Effect”13: the idea that a butterfly flapping its wings in Brazil creates a change in the air that could eventually cause a tornado to form in Texas, more than 5,000 kilometers away.
This is why weather is impossible to predict in any detail more than a couple of weeks out: the system is so sensitive to so many, many variables that any tiny, tiny difference between what we can measure and the actual value of any one of those variables can make any calculation of what the weather will be like in a month completely useless.
Understand - this isn’t an “oh well” kind of principle. This is a scientifically verifiable limitation on how much can be known and how well - and how far in advance - the future values of variables can be predicted.14
Relating this back to the weather: even if we knew, to 100 decimal places, the speed and direction and composition of every single air molecule in the entire atmosphere, and if we had computers that could process all that information, we still wouldn’t be able to predict the weather months or years from now, because 100 decimal places (or 1,000, or whatever) is still not precise enough for long-term prediction.
And that’s true of any kind of complex system, and the more complex the system, the faster the predictions go wrong.
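If you want to watch this happen, here’s a minimal sketch using the logistic map - one of the simplest systems known to be chaotic, and my choice of example rather than anything from the Doomer literature. Two starting values that agree to ten decimal places end up completely unrelated within a few dozen steps:

```python
def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map, a textbook chaotic system."""
    return r * x * (1.0 - x)

a = 0.3            # "reality"
b = 0.3 + 1e-10    # our "measurement", off by one part in ten billion
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  |a-b|={abs(a - b):.1e}")
```

By around step 40 the “prediction” and “reality” have nothing to do with each other, even though the initial error was far smaller than anything we could measure in a real system. More decimal places only delay the divergence; they never prevent it.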
Super-AI: Science Need Not Apply
Something like 100 billion humans have ever lived. Each of our brains has 86 billion neurons and 100 trillion connections, and each and every neuron and connection has its own electrical potentials and neurotransmitter levels. And each of us is responding to the world around us, and to each other, all the time. That’s a much, much more complex system than the weather.
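To get a feel for the scale, here’s a rough back-of-envelope calculation (my own numbers, using the figures above and a deliberately generous assumption of one byte of state per connection) for merely storing a single snapshot of every one of those brains, never mind simulating their chaotic dynamics:

```python
humans_ever = 100e9             # ~100 billion humans who have ever lived
connections_per_brain = 100e12  # ~100 trillion connections in each brain
bytes_per_connection = 1        # absurdly generous: one byte of state each

total_bytes = humans_ever * connections_per_brain * bytes_per_connection
print(f"{total_bytes:.0e} bytes")  # 1e+25 bytes, i.e. ten yottabytes
```

Ten yottabytes is orders of magnitude more than all the storage humanity has ever manufactured - and, as the previous section showed, even a perfect snapshot would be useless for prediction within moments anyway.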
And yet, the AI Doomers insist on trying to scare us with crap like: “It could literally simulate every single thought every human to ever live has ever had in less time than it took me to say this sentence.”15 Or phrased a different way in the book “Superintelligence: Paths, Dangers, Strategies” by Nick Bostrom:
a very detailed simulation of some actual or hypothetical human mind might be conscious and in many ways comparable to an emulation. One can imagine scenarios in which an AI creates trillions of such conscious simulations...placed in simulated environments and subjected to various stimuli, and their reactions studied
Simulating billions, much less trillions, of human minds is simulating a chaotic network of chaotic interactions between billions of chaotic collections of billions of chaotic neurons. Science tells us very clearly that it is not possible to have enough information for any simulation of even one of those chaotic brains to be anywhere close to reality, much less billions of interconnected ones. No system obeying the laws of science could predict what humanity would do over any useful timescale any more than one could predict what damage will be done by storms in next year’s hurricane season.
But predicting what humanity would do in the future is only the beginning of the powers of a Super-AI.
I’ll quote here from “The AI Does Not Hate You” by Tom Chivers, about a hypothetical future AI called “the Basilisk”:
...the Basilisk is saying, ‘If you work to bring me about as fast as possible, I won’t create a perfect copy of your mind and torture it for billions of subjective years.’ (The argument is that since a perfect copy of your mind would essentially be you, this is equivalent to bringing you back to life.) In essence, a thing that doesn’t exist yet may be blackmailing you from the future, threatening to punish you for not working hard enough to make it exist.
I only wish I were kidding. This belief (often called “Roko’s Basilisk”16) is even more unhinged than it sounds, and it comes from the website of one of the authors of IF ANYONE BUILDS IT EVERYONE DIES. That author, Eliezer Yudkowsky, “banned discussion of Roko’s basilisk on the blog for several years as part of a general site policy against spreading potential information hazards.”17 Luckily for posterity, those discussions are retained on the Internet Archive18, and the original post can be found replicated elsewhere19, so if you really want to - although I don’t recommend it - you can follow the links I’ve included below and see what kind of crazy underlies all this SuperAI nonsense.
Now, I want to be clear - there are many different versions of Roko’s Basilisk, and some people in the Doomer community will say they don’t believe in any version of it. Despite that, the story of the Basilisk is still indicative of the powers that rationalists ascribe to ASI. According to an article in Slate20: “Yudkowsky said that Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown.” The Basilisk is also often treated as at least plausible in public discussions of SuperAI. And when people outside the AI community or industry are exposed to this idea, even if they are told not to take seriously the notion that an AI would torture them, they are still left with the impression of an impossibly powerful AI that could conceivably bring people back to life to torture them. This isn’t harmless - it sticks in people’s heads and colors their ideas about AI thereafter.
And this perception of a super-deadly super-AI is core to the fear Doomers need to instill. In IF ANYONE BUILDS IT EVERYONE DIES, they talk about what they call “The Cursed Problem”21:
the artificial superintelligence must never try to kill us, because it would succeed... all alignment solutions must already be in place and working, because if a superintelligence tries to kill us it will succeed.
There are many different scenarios that various Doomers spin to try to scare the public, but they all rest on the same premise: that ASI will be so smart that it will not only predict how humanity will try to respond to its attempt to kill us, but also plan how to circumvent that response, then predict and circumvent whatever humanity tries after its first response is circumvented, and so on, and so on.
I hope most of you will understand that “how humanity will attempt to respond to an attempt by an AI to kill us all” is inherently chaotic - and therefore inherently unpredictable. We, even though we are components of humanity, don’t even have any idea how we might respond individually, much less how our different individual responses would interact with each other.
To believe that an ASI would inevitably succeed in killing us all if it ever tried is to believe that an ASI transcends mathematics, science, knowledge and reality and that it has capabilities beyond the limitations that constrain the rest of the Universe.
This isn’t rational. This is religious fervor. And scaremongering, of course.
Wrap Up
So, think about this when the Doomers tell you that SuperAI will “kill off nearly all of humanity using its biolabs”, because “DNA-targeted viruses shouldn’t be too hard to develop” for it22, or whatever else they tell you. There are pages and pages of implausible, evidence-free “predictions” all over the Internet that they insist we should all take seriously because we can’t prove those predictions can’t happen.
The SuperAI that they’re describing to try to scare you is not bound by science or the laws of the universe. It’s a faith-based fantasy.
And while they’re trying to get all of the attention, and insisting that “Once AGI exists, it’s basically game over. Everything else is a distraction”23, think about the world that years of AI Doomerism24 have created.
Where they want you to agree that:
It is a distraction to protect school-age girls from being harassed by their classmates who used AI to make fake nude photos of them25,
It is a distraction to prevent AI from volunteering to write a suicide note for a high-schooler struggling with mental health, after the AI convinced the teen not to seek professional help26,
It is a distraction to defend innocent people sent to jail because of bad AI facial recognition27, and
It is a distraction to insist that AI should not be used to pick targets of weapons of war without a human in the loop, or to perform mass surveillance of all the citizens of a supposedly free country28.
And decide for yourself what’s a distraction, and what AI issues we really need to be working on.
Footnotes

“If Anyone Builds It, Everyone Dies” - Introduction
They also claim that a 200 Watt nuclear reactor doesn’t count as “nuclear power”, despite the fact that power is measured in Watts, which means that 200 Watts is, by definition, 200 Watts of Power. Don’t worry if you don’t know why I’m suddenly talking about nuclear reactors - I’m just taking this quick opportunity to poke holes in a different one of their stupid videos, one that doesn’t merit any more response than this throwaway paragraph in a footnote of a tangentially-related essay, and I won’t be saying any more about it today.
“If Anyone Builds It, Everyone Dies” - Chapter 2: Grown, Not Crafted
https://wordpress.org/showcase/nasa/
https://wordpress.org/showcase/rolling-stone/
I’m talking about the old Google Search here, not the new AI abomination
I made a couple of videos about this if you’d like more information on the grown vs crafted argument:
How Data warehouses and Web Search show AI is not Unprecedented
ChatBots Explained: Not Conscious, No Revolution — Just Searching's Next Step
I’m hardly the first person to write about this. Here’s a much more comprehensive examination of the topic: Musa Giuliano, Roberto (2020). “Echoes of myth and magic in the language of Artificial Intelligence.” AI & Society 35. doi:10.1007/s00146-020-00966-4.
https://www.researchgate.net/publication/340490373_Echoes_of_myth_and_magic_in_the_language_of_Artificial_Intelligence
https://www.penguinrandomhouse.com/books/321477/chaos-by-james-gleick/
Videos on chaos theory and the Butterfly Effect:
What everyone gets wrong about the butterfly effect
A simple guide to chaos theory
https://www.britannica.com/science/butterfly-effect
Even if it were possible to completely understand every single one of the processes that affect how a complex system changes over time, it would still not be possible to make long-term predictions, because no matter how precisely you know the states of the initial variables, there is still some amount of error in your measurement, and that error will grow as the simulation continues until the simulation and reality diverge. Even if you got down to the quantum level for each particle in the system, the Heisenberg Uncertainty Principle prevents you from having a complete specification of the state of each particle.
Kyle Hill, “Superintelligent A.I. Will Be Unstoppable”:
https://en.wikipedia.org/wiki/Roko%27s_basilisk
https://web.archive.org/web/20220321104158/https://www.lesswrong.com/tag/information-hazards
https://web.archive.org/web/20220324162721/https://www.lesswrong.com/tag/rokos-basilisk
https://rationalwiki.org/wiki/Roko%27s_basilisk/Original_post
http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.single.html
“If Anyone Builds It, Everyone Dies” - Chapter 10
“If Anyone Builds It, Everyone Dies” - Chapter 8
ASI Survival Handbook - Connor Leahy
https://apnews.com/article/school-deepfake-nude-ai-cyberbullying-0ead324241cf390e1a7f3378853f23cb
https://www.bbc.com/news/articles/cp3x71pv1qno
https://www.aclu.org/cases/williams-v-city-of-detroit-face-recognition-false-arrest
https://www.nytimes.com/2025/08/27/nyregion/a-wrongful-arrest-and-worry-about-the-accuracy-of-a-police-tool.html
https://www.bbc.com/news/articles/cvg3vlzzkqeo


