
Content provided by Tech Policy Design Centre. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Tech Policy Design Centre or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://fa.player.fm/legal

Beyond the Pause: Australia’s AI Opportunity – Part 1

51:40
 

Episode 380134208 · Series 3293847

Ever wish you could sit down with a real-deal AI technologist and ask them what on earth is going on? In this double episode of Tech Mirror, Johanna chats with Bill Simpson-Young and Tiberio Caetano, CEO and Chief Scientist at Gradient Institute, an independent, non-profit research institute that works to build safety, ethics, accountability and transparency into AI systems.

In Part One of this wide-ranging conversation, the trio:

- define key terms like Artificial Intelligence, Machine Learning, Large Language Models, Frontier vs Foundation AI, and Narrow vs General AI

- chat about Bill’s biggest bugbear

- talk about why Bill and Tiberio both signed the Pause Letter

- discuss whether it is even possible to regulate artificial intelligence (spoiler alert: it is)

- consider how liability could be used to incentivise improved AI safety.

In Part Two, they discuss:

- the benefits and perils of open-source AI models

- the possibility of securing an international agreement on AI safety, US and China dynamics and the opportunity for Australian leadership

- the practical work that Gradient is doing to facilitate the technical implementation of ethical AI frameworks to address AI harms today.

Links

- Gradient Institute: https://www.gradientinstitute.org/

- Australian Government Paper and call for submission on Responsible Artificial Intelligence: https://consult.industry.gov.au/supporting-responsible-ai

- Gradient’s Submission on Responsible AI: https://www.gradientinstitute.org/posts/disr-safe-responsible-ai-submission/

- The Pause Letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/

- UK AI Safety Summit: https://www.gov.uk/government/publications/ai-safety-summit-introduction

- The Future of Life Institute’s recommendations for the UK AI Safety Summit: https://futureoflife.org/project/uk-ai-safety-summit/

- Gradient and National AI Centre: Implementing Australia’s AI Ethics Principles: https://www.gradientinstitute.org/posts/csiro-gradient-new-report/

- The Coming Wave: Technology, Power and the Twenty-First Century’s Greatest Dilemma: https://www.goodreads.com/en/book/show/90590134

See omnystudio.com/listener for privacy information.


41 episodes
