Content provided by London Futurists. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by London Futurists or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://fa.player.fm/legal

Provably safe AGI, with Steve Omohundro

42:59
Manage episode 400683173 series 3390521

AI systems have become more powerful in the last few years, and are expected to become even more powerful in the years ahead. The question naturally arises: what, if anything, should humanity be doing to increase the likelihood that these forthcoming powerful systems will be safe, rather than destructive?
Our guest in this episode has a long and distinguished history of analysing that question, and he has some new proposals to share with us. He is Steve Omohundro, the CEO of Beneficial AI Research, an organisation which is working to ensure that artificial intelligence is safe and beneficial for humanity.
Steve has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley. He went on to be an award-winning computer science professor at the University of Illinois. At that time, he developed the notion of basic AI drives, which we talk about shortly, as well as a number of potential key AI safety mechanisms.
Among many other roles which are too numerous to mention here, Steve served as a Research Scientist at Meta, the parent company of Facebook, where he worked on generative models and AI-based simulation, and he is an advisor to MIRI, the Machine Intelligence Research Institute.
Selected follow-ups:
Steve Omohundro: Innovative ideas for a better world
Metaculus forecast for the date of weak AGI
"The Basic AI Drives" (PDF, 2008)
TED Talk by Max Tegmark: How to Keep AI Under Control
Apple Secure Enclave
Meta Research: Teaching AI advanced mathematical reasoning
DeepMind AlphaGeometry
Microsoft Lean theorem prover
Terence Tao (Wikipedia)
NeurIPS Tutorial on Machine Learning for Theorem Proving (2023)
The team at MIRI
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration


Chapters

1. Provably safe AGI, with Steve Omohundro (00:00:00)

2. [Ad] Out-of-the-box insights from digital leaders (00:08:56)

3. (Cont.) Provably safe AGI, with Steve Omohundro (00:09:34)

103 episodes

