
#61 – Yampolskiy on Machine Consciousness and AI Welfare

 
Content provided by John Danaher. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by John Danaher or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://fa.player.fm/legal.

Roman Yampolskiy

In this episode I talk to Roman Yampolskiy. Roman is a tenured Associate Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and the author of many books and papers on AI security and ethics, including Artificial Superintelligence: A Futuristic Approach. We talk about how you might test for machine consciousness and the first steps towards a science of AI welfare.

You can listen below or download here. You can also subscribe to the podcast on Apple, Stitcher and a variety of other podcasting services (the RSS feed is here).

Show Notes

  • 0:00 – Introduction
  • 2:30 – Artificial minds versus Artificial Intelligence
  • 6:35 – Why talk about machine consciousness now when it seems far-fetched?
  • 8:55 – What is phenomenal consciousness?
  • 11:04 – Illusions as an insight into phenomenal consciousness
  • 18:22 – How to create an illusion-based test for machine consciousness
  • 23:58 – Challenges with operationalising the test
  • 31:42 – Does AI already have a minimal form of consciousness?
  • 34:08 – Objections to the proposed test and next steps
  • 37:12 – Towards a science of AI welfare
  • 40:30 – How do we currently test for animal and human welfare?
  • 44:10 – Dealing with the problem of deception
  • 47:00 – How could we test for welfare in AI?
  • 52:39 – If an AI can suffer, do we have a duty not to create it?
  • 56:48 – Do people take these ideas seriously in computer science?
  • 58:08 – What next?

Relevant Links
