Scaling Up Test-Time Compute with Latent Reasoning with Jonas Geiping - #723
Today, we're joined by Jonas Geiping, research group leader at Ellis Institute and the Max Planck Institute for Intelligent Systems to discuss his recent paper, “Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach.” This paper proposes a novel language model architecture which uses recurrent depth to enable “thinking in latent space.” We dig into “internal reasoning” versus “verbalized reasoning”—analogous to non-verbalized and verbalized thinking in humans, and discuss how the model searches in latent space to predict the next token and dynamically allocates more compute based on token difficulty. We also explore how the recurrent depth architecture simplifies LLMs, the parallels to diffusion models, the model's performance on reasoning tasks, the challenges of comparing models with varying compute budgets, and architectural advantages such as zero-shot adaptive exits and natural speculative decoding.
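The core idea discussed in the episode can be sketched in a few lines: a single shared block is iterated on a latent state, and iteration stops early once the state stops changing, so harder tokens naturally receive more compute (the "zero-shot adaptive exit"). This is a minimal illustrative sketch, not the paper's actual architecture; all names, dimensions, and the convergence criterion here are assumptions for illustration only.

```python
import math
import random

random.seed(0)
DIM = 8

# One shared "core block" weight matrix, reused at every recurrence step
# (standing in for the transformer block that is iterated in latent space).
W = [[random.gauss(0, 0.1) for _ in range(DIM)] for _ in range(DIM)]

def core_block(state, token_embedding):
    # new_state = tanh(state @ W + token_embedding); the token embedding
    # is re-injected at every step so the recurrence stays conditioned on it.
    out = []
    for j in range(DIM):
        s = sum(state[i] * W[i][j] for i in range(DIM)) + token_embedding[j]
        out.append(math.tanh(s))
    return out

def recurrent_depth_forward(token_embedding, max_steps=64, tol=1e-4):
    """Iterate the shared block on a latent state until it converges
    (an adaptive early exit) or the step budget runs out."""
    state = [0.0] * DIM
    for step in range(1, max_steps + 1):
        new_state = core_block(state, token_embedding)
        delta = max(abs(a - b) for a, b in zip(new_state, state))
        if delta < tol:
            return new_state, step  # early exit: this token needed little compute
        state = new_state
    return state, max_steps

easy = [0.0] * DIM                                  # trivial input converges at once
hard = [random.gauss(0, 1.0) for _ in range(DIM)]   # noisier input takes more steps
_, easy_steps = recurrent_depth_forward(easy)
_, hard_steps = recurrent_depth_forward(hard)
print(easy_steps, hard_steps)
```

Because the number of iterations is chosen at inference time rather than baked into the layer count, the same trained weights can spend more or fewer steps per token, which is the "dynamically allocates more compute based on token difficulty" behavior described above.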
The complete show notes for this episode can be found at https://twimlai.com/go/723.
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) · 750 episodes
All episodes

From Prompts to Policies: How RL Builds Better AI Agents with Mahesh Sathiamoorthy - #731 (1:01:25)
How OpenAI Builds AI Agents That Think and Act with Josh Tobin - #730 (1:07:27)
Exploring the Biology of LLMs with Circuit Tracing with Emmanuel Ameisen - #727 (1:34:06)
Waymo's Foundation Model for Autonomous Driving with Drago Anguelov - #725 (1:09:07)
Imagine while Reasoning in Space: Multimodal Visualization-of-Thought with Chengzu Li - #722 (42:11)
Inside s1: An o1-Style Reasoning Model That Cost Under $50 to Train with Niklas Muennighoff - #721 (49:29)
Accelerating AI Training and Inference with AWS Trainium2 with Ron Diamant - #720 (1:07:05)
AI Trends 2025: AI Agents and Multi-Agent Systems with Victor Dibia - #718 (1:44:59)
Speculative Decoding and Efficient LLM Inference with Chris Lott - #717 (1:16:30)