Nora Belrose - AI Development, Safety, and Meaning

2:29:50
 
Episode 450673952 · Series 2803422
Content provided by Machine Learning Street Talk (MLST). All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Machine Learning Street Talk (MLST) or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://fa.player.fm/legal

Nora Belrose, Head of Interpretability Research at EleutherAI, discusses critical challenges in AI safety and development. The conversation begins with her technical work on concept erasure in neural networks through LEACE (LEAst-squares Concept Erasure), and highlights how neural networks' progression from simple to complex learning patterns could have important implications for AI safety.
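
For intuition, here is a minimal NumPy sketch of least-squares linear concept erasure in the spirit of LEACE: fit an affine map that removes every direction in feature space that is linearly predictive of a concept label. It follows the published closed-form recipe at a high level, but it is an illustrative simplification, not EleutherAI's concept-erasure library; the function name `fit_linear_eraser`, the tolerance, and the toy data are all invented for this sketch.

```python
import numpy as np

def fit_linear_eraser(X, Z, tol=1e-8):
    """Least-squares concept eraser in the spirit of LEACE (illustrative).

    X: (n, d) feature matrix; Z: (n, k) concept labels (e.g. one-hot).
    Applies r(x) = x - W_pinv @ P @ W @ (x - mu), where W is the whitening
    transform Sigma_xx^{-1/2} and P projects onto the span of W @ Sigma_xz,
    the whitened directions that are linearly predictive of Z.
    """
    X = np.asarray(X, dtype=float)
    Z = np.asarray(Z, dtype=float)
    n = X.shape[0]
    mu = X.mean(axis=0)
    Xc, Zc = X - mu, Z - Z.mean(axis=0)

    sigma_xx = Xc.T @ Xc / n                 # (d, d) feature covariance
    sigma_xz = Xc.T @ Zc / n                 # (d, k) cross-covariance with Z

    # Symmetric pseudo-inverse square root of the covariance (whitening).
    vals, vecs = np.linalg.eigh(sigma_xx)
    keep = vals > tol
    W = (vecs[:, keep] * vals[keep] ** -0.5) @ vecs[:, keep].T
    W_pinv = (vecs[:, keep] * vals[keep] ** 0.5) @ vecs[:, keep].T

    # Orthogonal projector onto the concept subspace in whitened coordinates.
    U, s, _ = np.linalg.svd(W @ sigma_xz, full_matrices=False)
    U = U[:, s > tol]
    A = W_pinv @ (U @ U.T) @ W               # removal operator on (x - mu)

    def erase(x):
        # Subtract the component of (x - mu) linearly predictive of Z.
        return x - (np.asarray(x, dtype=float) - mu) @ A.T

    return erase

# Toy check: plant a binary concept in random features, then erase it.
rng = np.random.default_rng(0)
Z = rng.integers(0, 2, size=(1000, 1)).astype(float)
X = rng.normal(size=(1000, 8)) + 3.0 * Z     # concept leaks into all features
Xe = fit_linear_eraser(X, Z)(X)
gap = np.abs(Xe[Z[:, 0] == 1].mean(0) - Xe[Z[:, 0] == 0].mean(0)).max()
print(f"max class-mean gap after erasure: {gap:.2e}")  # ~0: no linear signal left
```

After erasure the class-conditional means coincide, so no linear probe on the erased features can beat chance at recovering the concept; that guarantee, with minimal damage to the rest of the representation, is the selling point of the least-squares approach.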

Many fear that advanced AI will pose an existential threat -- pursuing its own dangerous goals once it's powerful enough. But Belrose challenges this popular doomsday scenario with a fascinating breakdown of why it doesn't add up.

Belrose also provides a detailed critique of current AI alignment approaches, particularly examining "counting arguments" and their limitations when applied to AI safety. She argues that the Principle of Indifference, on which such counting arguments rest, is an unreliable basis for predicting the behavior of advanced AI systems. The discussion also explores how emergent properties in complex AI systems could lead to unpredictable and potentially dangerous behaviors that simple reductionist approaches fail to capture.
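
To make the counting logic concrete, the toy sketch below (an invented example, with a hypothetical XOR target) counts how many boolean functions consistent with a training set also generalize to held-out inputs. A uniform, "indifferent" measure over training-consistent functions says generalization should almost never happen, yet trained networks generalize routinely; this is the gap Belrose points to when the same counting move is used to predict misaligned goals.

```python
from itertools import product

def target(bits):
    # Hypothetical ground-truth function: XOR of the first two bits.
    return bits[0] ^ bits[1]

inputs = list(product([0, 1], repeat=4))   # all 16 four-bit inputs
held_out = inputs[8:]                      # 8 inputs never seen in training

# Every assignment of outputs to the held-out inputs defines one function
# that fits the 8 training inputs perfectly, so 2**8 = 256 candidates fit.
generalizers = sum(
    all(y == target(x) for y, x in zip(outputs, held_out))
    for outputs in product([0, 1], repeat=len(held_out))
)
print(f"{generalizers} / {2 ** len(held_out)} training-consistent functions generalize")
# Prints 1 / 256: under the Principle of Indifference, generalization looks
# vanishingly unlikely, yet real training finds the generalizing function
# routinely -- suggesting the uniform prior, not the network, is at fault.
```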

The conversation concludes by exploring broader philosophical territory, where Belrose discusses her growing interest in Buddhism's potential relevance to a post-automation future. She connects concepts of moral anti-realism with Buddhist ideas about emptiness and non-attachment, suggesting these frameworks might help humans find meaning in a world where AI handles most practical tasks. Rather than viewing this automated future with alarm, she proposes that Zen Buddhism's emphasis on spontaneity and presence might complement a society freed from traditional labor.

SPONSOR MESSAGES:

CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.

https://centml.ai/pricing/

Tufa AI Labs is a brand-new research lab in Zurich started by Benjamin Crouzier, focused on ARC and AGI. They just acquired MindsAI, the current winners of the ARC challenge. Are you interested in working on ARC, or in getting involved in their events? Go to https://tufalabs.ai/

Nora Belrose:

https://norabelrose.com/

https://scholar.google.com/citations?user=p_oBc64AAAAJ&hl=en

https://x.com/norabelrose

SHOWNOTES:

https://www.dropbox.com/scl/fi/38fhsv2zh8gnubtjaoq4a/NORA_FINAL.pdf?rlkey=0e5r8rd261821g1em4dgv0k70&st=t5c9ckfb&dl=0

TOC:

1. Neural Network Foundations

[00:00:00] 1.1 Philosophical Foundations and Neural Network Simplicity Bias

[00:02:20] 1.2 LEACE and Concept Erasure Fundamentals

[00:13:16] 1.3 LISA Technical Implementation and Applications

[00:18:50] 1.4 Practical Implementation Challenges and Data Requirements

[00:22:13] 1.5 Performance Impact and Limitations of Concept Erasure

2. Machine Learning Theory

[00:32:23] 2.1 Neural Network Learning Progression and Simplicity Bias

[00:37:10] 2.2 Optimal Transport Theory and Image Statistics Manipulation

[00:43:05] 2.3 Grokking Phenomena and Training Dynamics

[00:44:50] 2.4 Texture vs Shape Bias in Computer Vision Models

[00:45:15] 2.5 CNN Architecture and Shape Recognition Limitations

3. AI Systems and Value Learning

[00:47:10] 3.1 Meaning, Value, and Consciousness in AI Systems

[00:53:06] 3.2 Global Connectivity vs Local Culture Preservation

[00:58:18] 3.3 AI Capabilities and Future Development Trajectory

4. Consciousness Theory

[01:03:03] 4.1 4E Cognition and Extended Mind Theory

[01:09:40] 4.2 Thompson's Views on Consciousness and Simulation

[01:12:46] 4.3 Phenomenology and Consciousness Theory

[01:15:43] 4.4 Critique of Illusionism and Embodied Experience

[01:23:16] 4.5 AI Alignment and Counting Arguments Debate

(TRUNCATED, TOC embedded in MP3 file with more information)
