
Content provided by Skeptics in the Pub Online. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Skeptics in the Pub Online or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://fa.player.fm/legal

Too dangerous to publish? Navigating the high-stakes nature of AI research – Rosie Campbell

1:22:15
 
Manage episode 374169622 series 3327627

As AI becomes increasingly advanced, it promises many benefits but also comes with risks. How can we mitigate these risks while preserving scientific inquiry and openness? Who is responsible for anticipating the impacts of AI research, and how can they do so effectively? What changes, if any, need to be made to the peer review process? In this talk, we’ll explore these tensions and how they are playing out right now in the AI community. AI is not the first high-stakes, ‘dual-use’ field to face these questions. Taking inspiration from fields like cybersecurity and biosecurity, we’ll look at possible approaches to responsible publication, their strengths and limitations, and how they might be used in practice for AI.

Rosie Campbell leads the Safety-Critical AI program at the Partnership on AI, a multistakeholder nonprofit shaping the future of responsible AI. Her main focus is on responsible publication and deployment practices for increasingly advanced AI. Previously, she was Assistant Director of the Center for Human-Compatible AI at UC Berkeley, a Research Engineer at BBC R&D, and cofounder of Manchester Futurists. Her academic background spans physics, philosophy, and computer science. Rosie is also a productivity nerd and enjoys thinking about how to optimize systems, and how to use reason and evidence to improve the world.

The music used in this episode is by Thula Borah and is used with permission.


92 episodes

