P(doom)

Published on July 1, 2025 at 10:47 AM

Consider this table of P(doom) values from the FoAK. There's a pretty wide range, from 0% to 99.9% (see the rough summary sketch after the table). But I want to argue that people like Marc Andreessen, Yann LeCun, Richard Sutton and the estimable Mr Booch are certainly wrong in estimating that P(doom) is ~0%. Foundation models (FMs) are already very good at science. So, if you were a postdoc or academic, an FM would be like having a bright PhD student who doesn't have much taste, but knows a lot and has read everything. That's pretty helpful to a researcher.

The easiest cause of doom is a bioweapon, a weaponised virus or bacterium. Now, of course, FMs have guardrails built into them, so that if you ask a system to help you build a bioweapon, it won't. But the point is that the guardrails have to be built in. They are not there in the raw model. Training requires a lot of compute, but inference requires a lot less. So, someone could steal the weights of a raw model from a frontier lab and run the unguardrailed model on commodity hardware. Of course, if you work at a government bioweapons lab, you are going to want your own unguardrailed system. So, systems that are good at helping design bioweapons are going to be out there somewhere, and that somewhere is going to include rogue states of various kinds and - who knows? - bad actors of other sorts (amoksters, criminal gangs, terrorist groups). There are hobbyist biohackers, and you can buy a complete home genetic engineering kit for $1664. So, we don't have to believe in a The Machines Hate Us! scenario. We just have to believe that it is going to get easier to cook up something nasty in the years and decades to come.
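To see why the training/inference asymmetry matters, here is a back-of-envelope sketch. It uses the common heuristics of roughly 6ND FLOPs to train a dense model with N parameters on D tokens, and roughly 2N FLOPs per generated token at inference; the model size and token count below are illustrative assumptions of mine, not figures from any particular lab.

```python
# Back-of-envelope sketch of the training vs. inference compute gap.
# Assumed heuristics: ~6*N*D FLOPs to train a dense model with N parameters
# on D tokens, and ~2*N FLOPs per generated token at inference.
# N and D are illustrative, not real lab figures.

N = 70e9   # model parameters (e.g. a 70B-parameter open-weights model)
D = 2e12   # training tokens

training_flops = 6 * N * D   # one-off cost: needs a large cluster
flops_per_token = 2 * N      # recurring cost: per generated token

tokens_equivalent = training_flops / flops_per_token  # works out to 3 * D

print(f"Training:  {training_flops:.2e} FLOPs (done once, by a frontier lab)")
print(f"Inference: {flops_per_token:.2e} FLOPs per token (commodity hardware)")
print(f"Training cost equals generating ~{tokens_equivalent:.1e} tokens")
```

The point of the sketch is only the ratio: whoever can serve the model needs many orders of magnitude less compute than whoever trained it, which is why stolen or leaked weights are the worry, not stolen data centres.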

This isn't to say there aren't lots of other ways the machines might kill us, or that the machines themselves might not kill us with bioweapons; it's just that the most direct route to doom is putting powerful unguardrailed systems in the hands of unreliable people (and unreliable institutions). Hence why I have a very high P(doom) of 80-99%. And this leaves aside the absolute impossibility of creating provably aligned A(G|S)I. Because, although it is easier to look inside the mind of an A(G|S)I than a human's, and you can try to interpret and control what it is thinking and doing, you are never going to be absolutely certain. And systems that can think more broadly, deeply and quickly than you are going to be an unpredictable factor thrown into our volatile geopolitical mix. I am starting to sound like Yudkowsky now. The next thing you know I will be calling for drone strikes on data centres. We just have to hope that the defensive systems can keep up with the offensive ones.

| Name | P(doom) | Notes |
| --- | --- | --- |
| Elon Musk | c. 10–30%[8] | Businessman and CEO of X, Tesla, and SpaceX |
| Lex Fridman | 10%[9] | American computer scientist and host of the Lex Fridman Podcast |
| Marc Andreessen | 0%[10] | American businessman |
| Geoffrey Hinton | 10–20% (all-things-considered); >50% (independent impression)[11] | "Godfather of AI" and 2024 Nobel Prize laureate in Physics |
| Demis Hassabis | >0%[12] | Co-founder and CEO of Google DeepMind and Isomorphic Labs and 2024 Nobel Prize laureate in Chemistry |
| Lina Khan | c. 15%[6] | Former chair of the Federal Trade Commission |
| Dario Amodei | c. 10–25%[6][13] | CEO of Anthropic |
| Vitalik Buterin | c. 10%[1][14] | Co-founder of Ethereum |
| Yann LeCun | <0.01%[15][Note 1] | Chief AI Scientist at Meta |
| Eliezer Yudkowsky | >95%[1] | Founder of the Machine Intelligence Research Institute |
| Nate Silver | 5–10%[16] | Statistician, founder of FiveThirtyEight |
| Yoshua Bengio | 50%[3][Note 2] | Computer scientist and scientific director of the Montreal Institute for Learning Algorithms |
| Daniel Kokotajlo | 70–80%[17] | AI researcher and founder of the AI Futures Project, formerly of OpenAI |
| Max Tegmark | >90%[18] | Swedish-American physicist, machine learning researcher, and author, best known for theorising the mathematical universe hypothesis and co-founding the Future of Life Institute |
| Holden Karnofsky | 50%[19] | Executive Director of Open Philanthropy |
| Emmett Shear | 5–50%[6] | Co-founder of Twitch and former interim CEO of OpenAI |
| Shane Legg | c. 5–50%[20] | Co-founder and Chief AGI Scientist of Google DeepMind |
| Emad Mostaque | 50%[21] | Co-founder of Stability AI |
| Zvi Mowshowitz | 60%[22] | Writer on artificial intelligence, former competitive Magic: The Gathering player |
| Jan Leike | 10–90%[1] | AI alignment researcher at Anthropic, formerly of DeepMind and OpenAI |
| Casey Newton | 5%[1] | American technology journalist |
| Roman Yampolskiy | 99.9%[23][Note 3] | Latvian computer scientist |
| Grady Booch | c. 0%[1][Note 4] | American software engineer |
| Dan Hendrycks | >80%[1][Note 5] | Director of the Center for AI Safety |
| Toby Ord | 10%[24] | Australian philosopher and author of The Precipice |
| Connor Leahy | 90%+[25] | German-American AI researcher; co-founder of EleutherAI |
| Paul Christiano | 50%[26] | Head of research at the US AI Safety Institute |
| Richard Sutton | 0%[27] | Canadian computer scientist and 2025 Turing Award laureate |
| Andrew Critch | 85%[28] | Founder of the Center for Applied Rationality |
| David Duvenaud | 85%[29] | Former Anthropic Safety Team Lead |
| Eli Lifland | c. 35–40%[30] | Top competitive superforecaster and co-author of AI 2027 |
| Paul Crowley | >80%[31] | Computer scientist at Anthropic |
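As a rough way of seeing the spread in the table, here is a small Python sketch. Where an entry gives a range I take its midpoint, and I read bounds like ">95%" or "<0.01%" as the stated bound; those simplifications are mine, not something the cited sources report.

```python
# Rough summary of the estimates in the table above. Ranges are collapsed
# to midpoints; bounds like ">95%" or "<0.01%" are read as the bound itself.
# These simplifications are the author's, not the sources'.
from statistics import median

estimates = {
    "Elon Musk": 20, "Lex Fridman": 10, "Marc Andreessen": 0,
    "Geoffrey Hinton": 15, "Demis Hassabis": 0, "Lina Khan": 15,
    "Dario Amodei": 17.5, "Vitalik Buterin": 10, "Yann LeCun": 0.01,
    "Eliezer Yudkowsky": 95, "Nate Silver": 7.5, "Yoshua Bengio": 50,
    "Daniel Kokotajlo": 75, "Max Tegmark": 90, "Holden Karnofsky": 50,
    "Emmett Shear": 27.5, "Shane Legg": 27.5, "Emad Mostaque": 50,
    "Zvi Mowshowitz": 60, "Jan Leike": 50, "Casey Newton": 5,
    "Roman Yampolskiy": 99.9, "Grady Booch": 0, "Dan Hendrycks": 80,
    "Toby Ord": 10, "Connor Leahy": 90, "Paul Christiano": 50,
    "Richard Sutton": 0, "Andrew Critch": 85, "David Duvenaud": 85,
    "Eli Lifland": 37.5, "Paul Crowley": 80,
}

values = sorted(estimates.values())
print(f"{len(values)} estimates, range {values[0]}%-{values[-1]}%")
print(f"median {median(values)}%")
print(f"at ~0%: {sum(v < 1 for v in values)}; at 50% or above: {sum(v >= 50 for v in values)}")
```

Even on this loose reading, the striking thing is not the median but the disagreement: a handful of serious people at essentially 0% and a larger group at 50% or above.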
