Unignited Atmospheres
And other reasons to be thankful
In Dynomight’s classic Thanksgiving listicle he gives as the number one Underrated Reason To Be Thankful:
That our atmosphere has low enough pressure and levels of [physics things] that the first nuclear bomb test in 1945 didn’t in fact ignite the atmosphere and engulf the planet in flames, which was still a bit of an open question when it happened.
This question was a plot point in the 2023 movie Oppenheimer. Of course we know, ex post, that nuclear bombs don’t set off a chain reaction through the whole atmosphere and annihilate all life on earth. But how likely was it, ex ante? The Manhattan Project physicists did the math and determined that it was “unreasonable to expect” that nitrogen in the air could sustain a chain reaction and “even less likely” that it could do so unlimitedly. “However,” they concluded, “the complexity of the argument and the absence of satisfactory experimental foundations makes further work on the subject highly desirable.”
Funny story, though. There was at least enough doubt that, to quote Scott Alexander’s telling of it in his review of Toby Ord’s The Precipice:
When the Trinity test produced a brighter fireball than expected, Manhattan Project administrator James Conant was “overcome with dread”, believing that atmospheric ignition had happened after all and the Earth had only seconds left.
There was, separately, as part of work on the hydrogen bomb, a similar but less consequential concern about lithium-7. The physicists did the math on that as well and concluded that lithium-7 could not sustain a fusion reaction. This one they got wrong. When testing the hydrogen bomb in the 1950s, the fuel was contaminated with lithium-7, yielding a much bigger explosion than expected and killing some bystanders. “Of the two major thermonuclear calculations made that summer at Berkeley,” says Ord, “they got one right and one wrong.” As Scott Alexander put it, “this doesn’t really seem like the kind of crazy anecdote you could tell in a civilization that was taking existential risk seriously enough.”
Here are two more Underrated Reasons To Be Thankful:
“That all the types of atoms that can sustain a fission chain reaction are heavy and so tend to spontaneously decay and stop existing, plus the earth is very old, meaning that none of those atoms exist on Earth anymore except uranium-235.”
“That uranium-235 only comes mixed with 100x as much uranium-238 which is chemically indistinguishable, and the best known way to separate them is to mix it with fluoride to make a gas and then spin that gas for months at 500 meters per second [over a thousand mph], which is obscenely hard, thank god, although the French are developing an ingenious chemical enrichment method, which, why, why are you doing that, it would move enrichment to the domain of standard chemical engineering technology, please stop.”
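Both quoted claims check out on the back of an envelope. Using standard values (not from Dynomight's post: U-235's half-life is about 704 million years, plutonium-239's about 24,100 years, and Earth is roughly 4.5 billion years old):

```python
# Back-of-envelope checks on the two quoted claims above.
# Half-lives (well-established values): U-235 ~704 million years,
# Pu-239 ~24,100 years; Earth's age ~4.5 billion years.

EARTH_AGE_YEARS = 4.5e9

def fraction_remaining(half_life_years, elapsed_years=EARTH_AGE_YEARS):
    """Fraction of an isotope surviving after elapsed_years of decay."""
    return 0.5 ** (elapsed_years / half_life_years)

# U-235 barely hangs on: about 1% of Earth's original stock survives.
print(f"U-235 remaining: {fraction_remaining(7.04e8):.1%}")   # ~1.2%

# Pu-239 is long gone: ~187,000 half-lives leaves effectively zero.
print(f"Pu-239 remaining: {fraction_remaining(2.41e4):.0e}")  # underflows to 0

# And the bracketed unit conversion: 500 m/s in mph.
MPH_PER_MPS = 3600 / 1609.344  # seconds per hour / meters per mile
print(f"500 m/s = {500 * MPH_PER_MPS:.0f} mph")  # ~1118 mph
```

So "very old" is doing real work there: U-235 has lost ~99% of its original abundance, and every fissile isotope with a shorter half-life is gone entirely.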
Amen. At some point in the no-one-knows-how-far-or-imminent future, AI is going to surpass humans in general intelligence and planning and problem-solving ability, and when that happens, things may (may! we don’t need to be certain) spiral out of our control and we all literally die.
That sounds sci-fi-ish, and it’s hard to explain why all the obvious counterarguments (if it’s so smart, shouldn’t it know its purpose is to benefit humanity? what about Asimov’s laws of robotics? can’t we just turn it off? etc.) fail. So I like (“like”) Dynomight’s example of an ingenious way to enrich uranium with vastly less effort because it’s a nice concrete example of something ChatGPT 7ish, conceivably, could come up with that could doom humanity. Even without the AI itself going off the rails, it could become such a powerful tool that it democratizes weapons of mass destruction.
Here’s another terrifying excerpt from The Precipice, not specific to AI:
Every year as we invent new technologies, we may have a chance of stumbling across something that offers the destructive power of the atomic bomb or a deadly pandemic, but which turns out to be easy to produce from everyday materials. Discovering even one such technology might be enough to make the continued existence of human civilization impossible.
I want to acknowledge how speculative all of this is. I don’t understand the certainty of some doomers. In If Anyone Builds It, Everyone Dies, Yudkowsky and Soares lean hard on evolutionary arguments. They describe compelling parallels between how human brains evolved and how we’re “growing” these artificial neural networks. But there are differences which may or may not be fundamental. We just don’t know yet. So what’s scary is that the arguments may be right. We have to understand these AI models better before we’ll know. You can’t just gesture at differences between artificial and natural neural networks as if that’s a reason to dismiss all the worries! The ways in which we might achieve superintelligence without it going disastrously wrong are, if anything, even more speculative. As reasons to be hopeful, sure.
This is what I keep harping on. Our level of understanding is abysmal. We’re constantly surprised by the capabilities that emerge from these things as we scale them up. Sometimes we’re surprised by their lack of certain capabilities. How can they be this good at completely new math problems but barely able to use a mouse? In hindsight these disparities may make sense but we can’t seem to predict them. We don’t know how soon AGI and the ability to recursively self-improve will emerge. And when it does, we have little idea how that plays out.
“It’s just speculation”
To go full sci-fi and doom for a minute, it’s like we’re in a spaceship and wondering what will happen if we fly into a black hole. No one has ever done that before so the theory that we’ll be torn apart into our constituent atoms and smeared across space-time never to return is “just speculation”. How do you know we won’t pop out of a different black hole on the other side of the galaxy? Technically we don’t know until we try!
What skews me doomier, despite all the uncertainty about future AI, is how high the stakes are. In 1933 it was “just speculation” that nuclear chain reactions were possible. Turns out they were! Then it was “just speculation” that we could accidentally ignite the entire atmosphere. That one turned out false. But imagine if we had figured out how to start nuclear chain reactions without deeply understanding the underlying physics. In that case you really need to pause the experiments until you can rule out the possibility of a runaway chain reaction that eats the whole earth.
I hope these analogies don’t backfire for the people who need to hear them. Some people fixate on the part where the previously speculated doomsday scenarios turned out to be wrong. Others posit psychological reasons why someone would foretell an apocalypse and feel satisfied that this means we’re safe. Perhaps we should expunge the doomy terminology from our vocabulary and frame this purely as solving the urgent scientific and technical problem of specifying goals for an advanced AI that keep it aligned with humanity’s goals.
This AGI Friday is adapted from a couple entries in a private mailing list I created last year called Thankful Thursdays aka Thursgiving. Sorry for jumping the gun, holiday-wise. Happy Halloween! Also thanks to Gabe Hayos in particular for helpful discussion.


I'm trying out Substack's chat feature over at https://substack.com/chat/4048552 but let me try repeating it here in case this is a more natural discussion area:
For this AGI Friday I was tempted to kvetch about a short blog post by Anil Dash. But maybe that works better as a back-and-forth debate with people who are sympathetic to his view. Does that include any of you? Here's his post: https://www.anildash.com/2025/10/17/the-majority-ai-view/
And here's my opening salvo:
It starts out superficially reasonable-seeming: lots of people think AI is overhyped and don't think AGI is close. True and fine. But then he (a) subtly shifts to mocking, or at least thoughtlessly dismissing, the very concept of AGI and (b) opines that everyone who agrees with him is being silenced by authoritarianism or something. Like everyone in tech has to pretend to be a doomer or an accelerationist or they'll be blacklisted. According to Anil Dash, those on social media who profess to believe in transformative AI are just "hustle bros" and wannabe influencers, like the web3/metaverse/blockchain promoters before them.
(Also he praises the infuriating "AI snake oil" people, now rebranded to "AI as normal technology". See Scott Alexander's "AI as profoundly abnormal technology" for a pretty impeccable rebuttal.)