
Content provided by CSPI. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by CSPI or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://fa.player.fm/legal.

Waiting for the Betterness Explosion | Robin Hanson & Richard Hanania

1:42:06
 

Robin Hanson joins the podcast to talk about the AI debate. He explains his reasons for being skeptical of “foom,” the idea that a superintelligence will suddenly emerge, rapidly improve itself, and potentially destroy humanity in the service of its goals. Among his arguments are:

* We should start with a very low prior on something like this happening, given the history of the world. We already have “superintelligences” in the form of firms, for example, and they improve only slowly and incrementally.

* There are different levels of abstraction with regard to intelligence and knowledge. A machine that can reason very fast may not have the specific knowledge needed to do important things.

* We may be erring in thinking of intelligence as a general quality rather than as something more domain-specific.

Hanania presents various arguments made by AI doomers, and Hanson responds to each in turn, eventually assigning a less than 1% chance that something like the scenario imagined by Eliezer Yudkowsky and others will come to pass.

He also discusses why he thinks it is a waste of time to worry about the control problem before we know what any supposed superintelligence will even look like. The conversation also covers why so many smart people seem drawn to AI doomerism, and why you shouldn’t worry all that much about the principal-agent problem in this area.

Listen in podcast form or watch on YouTube. You can also read a transcript of the conversation here.

Links:

* The Hanson-Yudkowsky AI-Foom Debate

* Previous Hanson appearance on CSPI podcast, audio and transcript

* Eric Drexler, Engines of Creation

* Eric Drexler, Nanosystems

* Robin Hanson, “Explain the Sacred”

* Robin Hanson, “We See the Sacred from Afar, to See It the Same.”

* Articles by Robin on AI alignment:

* “Prefer Law to Values” (October 10, 2009)

* “The Betterness Explosion” (June 21, 2011)

* “Foom Debate, Again” (February 8, 2013)

* “How Lumpy AI Services?” (February 14, 2019)

* “Agency Failure AI Apocalypse?” (April 10, 2019)

* “Foom Update” (May 6, 2022)

* “Why Not Wait?” (June 30, 2022)

Get full access to Center for the Study of Partisanship and Ideology at www.cspicenter.com/subscribe
