
Content provided by John Willis. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by John Willis or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://fa.player.fm/legal

S4 E21 - Erik J. Larson - The Myth of AI and Unravelling The Hype

Duration: 1:04:25
 

In this episode of the Profound Podcast, I speak with Erik J. Larson, author of The Myth of Artificial Intelligence, about the speculative nature and real limitations of AI, particularly in relation to achieving Artificial General Intelligence (AGI). Larson delves into the philosophical and scientific misunderstandings surrounding AI, challenging the dominant narrative that AGI is just around the corner. Drawing from his expertise and experience in the field, Larson explains why much of the AI hype lacks empirical foundation. He emphasizes the limits of current AI models, particularly their reliance on inductive reasoning, which, though powerful, is insufficient for achieving human-like intelligence.

Larson discusses how the field of AI has historically blended speculative futurism with genuine technological advancements, often fueled by financial incentives rather than scientific rigor. He highlights how this approach has led to misconceptions about AI’s capabilities, especially in the context of AGI. Drawing connections to philosophical theories of inference, Larson introduces deductive, inductive, and abductive reasoning, explaining how current AI systems fall short in their over-reliance on inductive methods. The conversation touches on the challenges of abduction (the "broken" form of reasoning humans often use) and the difficulty of replicating this in AI systems.
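As a rough illustration of the distinction Larson draws between these forms of inference, here is a minimal sketch (not code from the episode; the "wet grass" scenario and all numbers are invented for this example):

```python
# Toy illustration of deduction, induction, and abduction.
# Everything here is made up for the example.

def deduce(rule_holds: bool, it_rained: bool) -> bool:
    """Deduction: a rule ("rain makes the grass wet") plus a fact ("it rained")
    yields a conclusion that is guaranteed if the premises are true."""
    return rule_holds and it_rained  # conclusion: "the grass is wet"

def induce(wet_after_rain: list[bool]) -> float:
    """Induction: generalize from repeated observations; the conclusion is
    only as strong as the data behind it."""
    return sum(wet_after_rain) / len(wet_after_rain)

def abduce(observation: str, candidate_causes: dict[str, float]) -> str:
    """Abduction: given an observation, pick the most plausible explanation.
    The best guess may still be wrong -- there is no guarantee."""
    return max(candidate_causes, key=candidate_causes.get)

if __name__ == "__main__":
    print(deduce(rule_holds=True, it_rained=True))                   # True (certain)
    print(induce([True, True, False, True]))                         # 0.75 (probable)
    print(abduce("grass is wet", {"rain": 0.7, "sprinkler": 0.3}))   # "rain" (a guess)
```

In Larson's framing, current machine-learning systems live almost entirely in the inductive column, while everyday human reasoning leans heavily on the guess-and-check pattern of abduction.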

Throughout the discussion, we explore the social and ethical implications of AI, including concerns about data limitations, the dangers of synthetic data, and the looming “data wall” that could hinder future AI progress. We also touch on broader societal impacts, such as how AI’s potential misuse and over-reliance might affect innovation and human intelligence.


71 episodes

