Causal Inference & Reinforcement Learning with Andrew Lampinen Ep 13 | CausalBanditsPodcast.com
Love Causal Bandits Podcast?
Help us bring more quality content: Support the show
Video version of this episode is available here
Causal Inference with LLMs and Reinforcement Learning Agents?
Do LLMs have a world model?
Can they reason causally?
What's the connection between LLMs, reinforcement learning, and causality?
Andrew Lampinen, PhD (Google DeepMind) shares insights from his research on LLMs, reinforcement learning, causal inference, and generalizable agents.
We also discuss the nature of intelligence and rationality, and how they interact with evolutionary fitness.
Join us on the journey!
Recorded on Dec 1, 2023 in London, UK.
About The Guest
Andrew Lampinen, PhD is a Senior Research Scientist at Google DeepMind. He holds a PhD in Cognitive Psychology from Stanford University. He's interested in cognitive flexibility and generalization, and how these abilities are enabled by factors like language, memory, and embodiment.
Connect with Andrew:
- Andrew on Twitter/X
- Andrew's web page
About The Host
Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur and a best-selling author in the area of causality (https://amzn.to/3QhsRz4).
Connect with Alex:
- Alex on the Internet
Links
Papers
- Lampinen et al. (2023) - "Passive learning of active causal strategies in agents and language models" (https://arxiv.org/pdf/2305.16183.pdf)
- Dasgupta, Lampinen, et al. (2022) - "Language models show human-like content effects on reasoning tasks" (https://arxiv.org/abs/2207.07051)
- Santoro, Lampinen, et al. (2021) - "Symbolic behaviour in artificial intelligence" (https://www.researchgate.net/publication/349125191_Symbolic_Behaviour_in_Artificial_Intelligence)
Rumi.ai
All-in-one meeting tool with real-time transcription & searchable Meeting Memory™
Support the show
Causal Bandits Podcast
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com
Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4
Chapters
1. Causal Inference & Reinforcement Learning with Andrew Lampinen Ep 13 | CausalBanditsPodcast.com (00:00:00)
2. [Ad] Rumi.ai (00:22:02)
3. (Cont.) Causal Inference & Reinforcement Learning with Andrew Lampinen Ep 13 | CausalBanditsPodcast.com (00:22:51)