Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://fa.player.fm/legal
“The Problem” by Rob Bensinger, tanagrabeast, yams, So8res, Eliezer Yudkowsky, Gretta Duleba
Manage episode 498682088 series 3364760
This is a new introduction to AI as an extinction threat, previously posted to the MIRI website in February alongside a summary. It was written independently of Eliezer and Nate's forthcoming book, If Anyone Builds It, Everyone Dies, and isn't a sneak peek of the book. Since the book is long and costs money, we expect this to be a valuable resource in its own right even after the book comes out next month.[1]
The stated goal of the world's leading AI companies is to build AI that is general enough to do anything a human can do, from solving hard problems in theoretical physics to deftly navigating social environments. Recent machine learning progress seems to have brought this goal within reach. At this point, we would be uncomfortable ruling out the possibility that AI more capable than any human is achieved in the next year or two, and [...]
---
Outline:
(02:27) 1. There isn't a ceiling at human-level capabilities.
(08:56) 2. ASI is very likely to exhibit goal-oriented behavior.
(15:12) 3. ASI is very likely to pursue the wrong goals.
(32:40) 4. It would be lethally dangerous to build ASIs that have the wrong goals.
(46:03) 5. Catastrophe can be averted via a sufficiently aggressive policy response.
The original text contained 1 footnote which was omitted from this narration.
---
First published:
August 5th, 2025
Source:
https://www.lesswrong.com/posts/kgb58RL88YChkkBNf/the-problem
---
Narrated by TYPE III AUDIO.
625 episodes