
If Anyone Builds It, Everyone Dies

Why Superhuman AI Would Kill Us All



By: Eliezer Yudkowsky, Nate Soares
Narrated by: Rafe Beckley

Buy for $22.49


"May prove to be the most important book of our time.”—Tim Urban, Wait But Why

The scramble to create superhuman AI has put us on the path to extinction—but it’s not too late to change course, as two of the field’s earliest researchers explain in this clarion call for humanity.

In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.

For decades, two signatories of that letter—Eliezer Yudkowsky and Nate Soares—have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us—and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn’t even be close.

How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.

The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.

“The best no-nonsense, simple explanation of the AI risk problem I've ever read.”—Yishan Wong, Former CEO of Reddit

Accolades & Awards

Best of 2025 • Most Popular
Computer Science • History & Culture • Politics & Government • Public Policy • Science & Technology • Technology & Society • Scary • Artificial Intelligence • Suspenseful • Inspiring • Technology

Critic reviews

“The most important book of the decade. This captivating page-turner, from two of today’s clearest thinkers, reveals that the competition to build smarter-than-human machines isn’t an arms race but a suicide race, fueled by wishful thinking.”—Max Tegmark, author of Life 3.0: Being Human in the Age of AI
“If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe we are nowhere near ready to make the transition to superintelligence safely, leaving us on the fast track to extinction. Through the use of parables and crystal-clear explainers, they convey their reasoning, in an urgent plea for us to save ourselves while we still can.”—Tim Urban, cofounder, Wait But Why
“The most important book I’ve read for years: I want to bring it to every political and corporate leader in the world and stand over them until they’ve read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster.”—Stephen Fry
“The best no-nonsense, simple explanation of the AI risk problem I've ever read.”—Yishan Wong, former CEO of Reddit
“Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous.”—Emmett Shear, former interim CEO of OpenAI
“Everyone should read this book. There’s a 70% chance that you—yes, you reading this right now—will one day grudgingly admit that we all should have listened to Yudkowsky and Soares when we still had the chance.”—Daniel Kokotajlo, AI Futures Project
"A compelling introduction to the world's most important topic. Artificial general intelligence could be just a few years away. This is one of the few books that takes the implications seriously, published right as the danger level begins to spike."—Scott Alexander, founder, Astral Codex Ten
“Claims about the risks of AI are often dismissed as advertising, but this book disproves it. Yudkowsky and Soares are not from the AI industry, and have been writing about these risks since before it existed in its present form. Read their disturbing book and tell us what they get wrong.”—Huw Price, Bertrand Russell Professor Emeritus, Trinity College, Cambridge
Highly rated for:

Clear Explanations • Accessible Concepts • Excellent Narration • Compelling Arguments • Thought-provoking Content

Like having a tiny tiger cub as a pet: eventually it will eat you. How soon?

Terrifying and Probable


Having read much about the AI alignment problem and AI risks and being an AI practitioner, I approached this book with skepticism. I was wrong. This is a clever and concise presentation of the problem. I won't waste your time further reading this review - drop everything and go read this book, now. It's that important.

A must read for everyone


I am hesitant to even attempt a review of this book because the consequences seem so dire.

If anyone builds it, everyone dies


This work had excellent information about the workings of LLMs that I had not been aware of previously. It is engaging and does an excellent job explaining and illustrating the principles, arguments, and conclusions the authors are seeking to convey. It is also terrifying and confounding, but offers a glimmer of hope. There is much to fear if you take this work as seriously as it should be taken. However, there is also a chance to hear the warning and take action.

Excellent distillation of the perils of sleepwalking into an AI future!


This book, although written by a couple of hard-core tech and science nerds, makes the case as clear for non-techies as anyone with their background could likely hope to make it. The message is clear and stark, delivered in plain language, and never shies away from either the dangerous situation we are approaching or why we need to change our path if we want to survive. It does not claim that it is in any way easy to avoid extinction brought about by someone building superhuman AGI, but it does explain what a realistic path toward avoiding such extinction could look like, and it gives examples from history of how humanity has successfully avoided catastrophe through cooperation. Every leader of every nation, every minister and member of parliament, should read this book. Every big tech CEO should read it. Every earthling should read it. If enough people understand this message in time, we still have a chance for new generations of humans and life on earth to survive and flourish into the future.

This book, if read and understood and taken with the seriousness it deserves, could turn out to be more important than the Bible, the Quran, the Vedas, Principia, Wealth of Nations, and Das Kapital combined.

If people get this book, it’s the most important book in the history of the known universe.

