#76 (Bonus) - Is P(doom) meaningful? Debating epistemology (w/ Liron Shapira)
Liron Shapira, host of Doom Debates, invited us on to discuss Popperian versus Bayesian epistemology and whether we're worried about AI doom. As one might expect knowing us, we only got about halfway through the first subject, so get yourselves ready (presumably with many drinks) for part II in a few weeks! The era of Ben and Vaden's rowdy YouTube debates has begun. Vaden is jubilant, Ben is uncomfortable, and the world has never been more annoyed by Popperians.
Follow Liron on Twitter (@liron) and check out the Doom Debates YouTube channel and podcast.
We discuss
- Whether we're concerned about AI doom
- Bayesian reasoning versus Popperian reasoning
- Whether it makes sense to put numbers on all your beliefs
- Solomonoff induction
- Objective vs subjective Bayesianism
- Prediction markets and superforecasting
References
- Vaden's blog post on Cox's Theorem and Yudkowsky's claims of "Laws of Rationality": https://vmasrani.github.io/blog/2021/the_credence_assumption/
- Disproof of probabilistic induction (including Solomonoff induction): https://arxiv.org/abs/2107.00749
- EA Forum post Vaden mentioned on predictions being uncalibrated more than a year out: https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations
- Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: https://ifp.org/can-policymakers-trust-forecasters/
- Superforecaster p(doom) is ~1%: https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:~:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25).
- The existential risk persuasion tournament: https://www.astralcodexten.com/p/the-extinction-tournament
- Some more info in Ben's article on superforecasting: https://benchugg.com/writing/superforecasting/
- Slides on Content vs Probability: https://vmasrani.github.io/assets/pdf/popper_good.pdf
Socials
- Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron
- Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link
- Trust in the Reverend Bayes and get exclusive bonus content by becoming a Patreon subscriber here. Or give us one-time cash donations to help cover our lack of cash donations here.
- Click dem like buttons on YouTube
What's your credence that the second debate is as fun as the first? Tell us at incrementspodcast@gmail.com
Special Guest: Liron Shapira.