Ep 7 - Responding to a world with AGI - Richard Dazeley (Prof AI & ML, Deakin University)
In this episode, we speak with Prof Richard Dazeley about the implications of a world with AGI and how we can best respond. We talk about what he thinks AGI will actually look like as well as the technical and governance responses we should put in today and in the future to ensure a safe and positive future with AGI.
Prof Richard Dazeley is the Deputy Head of School at the School of Information Technology at Deakin University in Melbourne, Australia. He’s also a senior member of the International AI Existential Safety Community of the Future of Life Institute. His research at Deakin University focuses on aligning AI systems with human preferences, a field better known as “AI alignment”.
Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/
== Show links ==
-- About Richard --
* Bio: https://www.deakin.edu.au/about-deakin/people/richard-dazeley
* Twitter: https://twitter.com/Sprocc2
* Google Scholar: https://scholar.google.com.au/citations?user=Tp8Sx6AAAAAJ
* Australian Responsible Autonomous Agents Collective: https://araac.au/
* Machine Intelligence Research Lab at Deakin Uni: https://blogs.deakin.edu.au/mila/
-- Further resources --
* [Book] Life 3.0 by Max Tegmark: https://en.wikipedia.org/wiki/Life_3.0
* [Policy paper] FLI - Policymaking in the Pause: https://futureoflife.org/wp-content/uploads/2023/04/FLI_Policymaking_In_The_Pause.pdf
* Cyc project: https://en.wikipedia.org/wiki/Cyc
* Paperclips game: https://en.wikipedia.org/wiki/Universal_Paperclips
* Reward misspecification - See "Week 2" of this free online course: https://course.aisafetyfundamentals.com/alignment
-- Corrections --
From Richard, referring to dialogue around the ~4min mark:
"it was 1956 not 1957. Minsky didn’t make his comment until 1970. It was H. A. Simon and Allen Newell that said ten years after the Dartmouth conference and that was in 1958."
Related, other key statements & dates from Wikipedia (https://en.wikipedia.org/wiki/History_of_artificial_intelligence):
* 1958, H. A. Simon and Allen Newell: "within ten years a digital computer will be the world's chess champion" and "within ten years a digital computer will discover and prove an important new mathematical theorem."
* 1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do."
* 1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."
* 1970, Marvin Minsky: "In from three to eight years we will have a machine with the general intelligence of an average human being."
Recorded July 10, 2023
Artificial General Intelligence (AGI) Show with Soroush Pour