
For the best listening experience and to never miss an episode, subscribe to Machines Like Us on Apple Podcasts and Spotify.


Just a few years ago, it seemed like all anyone in AI wanted to talk about was existential risk: the idea that an artificial superintelligence could eventually break containment and destroy humanity. More than 30,000 experts signed an open letter demanding a pause on AI development; bills were drafted that would constrain the most powerful new models; and the “godfathers” of AI were travelling around the world, warning anyone who would listen that we were hurtling toward our extinction.

And then: we moved on. We started using AI for work, and school, and to plan our kids’ birthday parties. Collectively, we just stopped talking about the end of the world.

But Nate Soares didn’t move on. Last year, the artificial intelligence researcher wrote a book with Eliezer Yudkowsky called If Anyone Builds It, Everyone Dies. As you can probably tell from the title, the book is unequivocal: If we keep going down the path we’re on, it will almost certainly lead to the end of our species.

Now, not everyone is convinced of the arguments Soares makes. But if there’s even a chance he’s right, I think we need to hear him out.

Mentioned

If Anyone Builds It, Everyone Dies, by Eliezer Yudkowsky and Nate Soares
