The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading ...
Suzy Shepherd on Imagining Superintelligence and "Writing Doom" (1:03:08)
Suzy Shepherd joins the podcast to discuss her new short film "Writing Doom", which deals with AI risk. We discuss how to use humor in film, how to write concisely, how filmmaking is evolving, in what ways AI is useful for filmmakers, and how we will find meaning in an increasingly automated world. Here's Writing Doom: https://www.youtube.com/watch…
Andrea Miotti on a Narrow Path to Safe, Transformative AI (1:28:09)
Andrea Miotti joins the podcast to discuss "A Narrow Path" — a roadmap to safe, transformative AI. We talk about our current inability to precisely predict future AI capabilities, the dangers of self-improving and unbounded AI systems, how humanity might coordinate globally to ensure safe AI development, and what a mature science of intelligence wo…
Tamay Besiroglu on AI in 2030: Scaling, Automation, and AI Agents (1:30:29)
Tamay Besiroglu joins the podcast to discuss scaling, AI capabilities in 2030, breakthroughs in AI agents and planning, automating work, the uncertainties of investing in AI, and scaling laws for inference-time compute. Here's the report we discuss in the episode: https://epochai.org/blog/can-ai-scaling-continue-through-2030 Timestamps: 00:00 How i…
Ryan Greenblatt on AI Control, Timelines, and Slowing Down Around Human-Level AI (2:08:44)
Ryan Greenblatt joins the podcast to discuss AI control, timelines, takeoff speeds, misalignment, and slowing down around human-level AI. You can learn more about Ryan's work here: https://www.redwoodresearch.org/team/ryan-greenblatt Timestamps: 00:00 AI control 09:35 Challenges to AI control 23:48 AI control as a bridge to alignment 26:54 Policy a…
Tom Barnes on How to Build a Resilient World (1:19:41)
Tom Barnes joins the podcast to discuss how much the world spends on AI capabilities versus AI safety, how governments can prepare for advanced AI, and how to build a more resilient world. Tom's report on advanced AI: https://www.founderspledge.com/research/research-and-recommendations-advanced-artificial-intelligence Timestamps: 00:00 Spending on …
Samuel Hammond on why AI Progress is Accelerating - and how Governments Should Respond (2:16:11)
Samuel Hammond joins the podcast to discuss whether AI progress is slowing down or speeding up, AI agents and reasoning, why superintelligence is an ideological goal, open source AI, how technical change leads to regime change, the economics of advanced AI, and much more. Our conversation often references this essay by Samuel: https://www.secondbes…
Anousheh Ansari on Innovation Prizes for Space, AI, Quantum Computing, and Carbon Removal (1:03:10)
Anousheh Ansari joins the podcast to discuss how innovation prizes can incentivize technical innovation in space, AI, quantum computing, and carbon removal. We discuss the pros and cons of such prizes, where they work best, and how far they can scale. Learn more about Anousheh's work here: https://www.xprize.org/home Timestamps: 00:00 Innovation pr…
Mary Robinson joins the podcast to discuss long-view leadership, risks from AI and nuclear weapons, prioritizing global problems, how to overcome barriers to international cooperation, and advice to future leaders. Learn more about Robinson's work as Chair of The Elders at https://theelders.org Timestamps: 00:00 Mary's journey to presidency 05:11 L…
Emilia Javorsky on how AI Concentrates Power (1:03:35)
Emilia Javorsky joins the podcast to discuss AI-driven power concentration and how we might mitigate it. We also discuss optimism, utopia, and cultural experimentation. Apply for our RFP here: https://futureoflife.org/grant-program/mitigate-ai-driven-power-concentration/ Timestamps: 00:00 Power concentration 07:43 RFP: Mitigating AI-driven power co…
Anton Korinek on Automating Work and the Economics of an Intelligence Explosion (1:32:24)
Anton Korinek joins the podcast to discuss the effects of automation on wages and labor, how we measure the complexity of tasks, the economics of an intelligence explosion, and the market structure of the AI industry. Learn more about Anton's work at https://www.korinek.com Timestamps: 00:00 Automation and wages 14:32 Complexity for people and mach…
Christian Ruhl on Preventing World War III, US-China Hotlines, and Ultraviolet Germicidal Light (1:36:01)
Christian Ruhl joins the podcast to discuss US-China competition and the risk of war, official versus unofficial diplomacy, hotlines between countries, catastrophic biological risks, ultraviolet germicidal light, and ancient civilizational collapse. Find out more about Christian's work at https://www.founderspledge.com Timestamps: 00:00 US-China co…
Christian Nunes joins the podcast to discuss deepfakes, how they impact women in particular, how we can protect ordinary victims of deepfakes, and the current landscape of deepfake legislation. You can learn more about Christian's work at https://now.org and about the Ban Deepfakes campaign at https://bandeepfakes.org Timestamps: 00:00 The National…
Dan Faggella on the Race to AGI (1:45:20)
Dan Faggella joins the podcast to discuss whether humanity should eventually create AGI, how AI will change power dynamics between institutions, what drives AI progress, and which industries are implementing AI successfully. Find out more about Dan at https://danfaggella.com Timestamps: 00:00 Value differences in AI 12:07 Should we eventually create …
Liron Shapira on Superintelligence Goals (1:26:30)
Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI. Timestamps: 00:00 Intelligence as optimization-power 05:18 Will LLMs imitate human values? 07:15 Why would AI develop dangerous goals? 09:55 Goal-completeness 12:53 A…
Annie Jacobsen on Nuclear War - a Second by Second Timeline (1:26:28)
Annie Jacobsen joins the podcast to lay out a second-by-second timeline for how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at https://anniejacobsen.com Timestamps: 00:00 A scenario of nuclear war 06:56 Who would launch an attack…
Katja Grace on the Largest Survey of AI Researchers (1:08:00)
Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, capabilities required for continued AI-related transformation, the idea of discontinuous progress, the impacts of AI from either side of the human-level intelligence threshold, intelligence and power, an…
Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting (1:36:05)
Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work at https://pauseai.info Timestamps: 00:00 Pausing AI 10:23 Risks during an AI pause 19:41 Hardware overhang 29:04 T…
Sneha Revanur joins the podcast to discuss the social effects of AI, the illusory divide between AI ethics and AI safety, the importance of humans in the loop, the different effects of AI on younger and older people, and the importance of AIs identifying as AIs. You can read more about Sneha's work at https://encodejustice.org Timestamps: 00:00 Enc…
Roman Yampolskiy on Shoggoth, Scaling Laws, and Evidence for AI being Uncontrollable (1:31:13)
Roman Yampolskiy joins the podcast again to discuss whether AI is like a Shoggoth, whether scaling laws will hold for more agent-like AIs, evidence that AI is uncontrollable, and whether designing human-like AI would be safer than the current development path. You can read more about Roman's work at http://cecs.louisville.edu/ry/ Timestamps: 00:00 …
On this special episode of the podcast, Flo Crivello talks with Nathan Labenz about AI as a new form of life, whether attempts to regulate AI risk regulatory capture, how a GPU kill switch could work, and why Flo expects AGI in 2-8 years. Timestamps: 00:00 Technological progress 07:59 Regulatory capture and AI 11:53 AI as a new form of life 15:44 Ca…
Carl Robichaud on Preventing Nuclear War (1:39:03)
Carl Robichaud joins the podcast to discuss the new nuclear arms race, how much world leaders and ideologies matter for nuclear risk, and how to reach a stable, low-risk era. You can learn more about Carl's work here: https://www.longview.org/about/carl-robichaud/ Timestamps: 00:00 A new nuclear arms race 08:07 How much do world leaders matter? 18:04 …
Frank Sauer on Autonomous Weapon Systems (1:42:41)
Frank Sauer joins the podcast to discuss autonomy in weapon systems, killer drones, low-tech defenses against drones, the flaws and unpredictability of autonomous weapon systems, and the political possibilities of regulating such systems. You can learn more about Frank's work here: https://metis.unibw.de/en/ Timestamps: 00:00 Autonomy in weapon syst…
Darren McKee on Uncontrollable Superintelligence (1:40:37)
Darren McKee joins the podcast to discuss how AI might be difficult to control, which goals and traits AI systems will develop, and whether there's a unified solution to AI alignment. Timestamps: 00:00 Uncontrollable superintelligence 16:41 AI goals and the "virus analogy" 28:36 Speed of AI cognition 39:25 Narrow AI and autonomy 52:23 Reliability of c…
Mark Brakel on the UK AI Summit and the Future of AI Policy (1:48:36)
Mark Brakel (Director of Policy at the Future of Life Institute) joins the podcast to discuss the AI Safety Summit in Bletchley Park, objections to AI policy, AI regulation in the EU and US, global institutions for safe AI, and autonomy in weapon systems. Timestamps: 00:00 AI Safety Summit in the UK 12:18 Are officials up to date on AI? 23:22 Object…
Dan Hendrycks on Catastrophic AI Risks (2:07:24)
Dan Hendrycks joins the podcast again to discuss X.ai, how AI risk thinking has evolved, malicious use of AI, AI race dynamics between companies and between militaries, making AI organizations safer, and how representation engineering could help us understand AI traits like deception. You can learn more about Dan's work at https://www.safe.ai Times…
Samuel Hammond on AGI and Institutional Disruption (2:14:51)
Samuel Hammond joins the podcast to discuss how AGI will transform economies, governments, institutions, and other power structures. You can read Samuel's blog at https://www.secondbest.ca Timestamps: 00:00 Is AGI close? 06:56 Compute versus data 09:59 Information theory 20:36 Universality of learning 24:53 Hard steps in evolution 30:30 Governments…
Are we doomed to a future of loneliness and unfulfilling online interactions? What if technology made us feel more connected instead? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined future…
Let’s imagine a future where AGI is developed but kept at a distance from practically impacting the world, while narrow AI remakes the world completely. Most people don’t know or care about the difference and have no idea how they could distinguish between a human and an artificial stranger. Inequality sticks around and AI fractures society into separa…
Steve Omohundro on Provably Safe AGI (2:02:32)
Steve Omohundro joins the podcast to discuss Provably Safe Systems, a paper he co-authored with FLI President Max Tegmark. You can read the paper here: https://arxiv.org/pdf/2309.01933.pdf Timestamps: 00:00 Provably safe AI systems 12:17 Alignment and evaluations 21:08 Proofs about language model behavior 27:11 Can we formalize safety? 30:29 Provable …
Imagine A World: What if AI enabled us to communicate with animals? (1:04:07)
What if AI allowed us to communicate with animals? Could interspecies communication lead to new levels of empathy? How might communicating with animals lead humans to reimagine our place in the natural world? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. W…
If you could extend your life, would you? How might life extension technologies create new social and political divides? How can the world unite to solve the great problems of our time, like AI risk? What if AI creators could agree on an inspection process to expose AI dangers before they're unleashed? Imagine a World is a podcast exploring a range…
Johannes Ackva on Managing Climate Change (1:40:13)
Johannes Ackva joins the podcast to discuss the main drivers of climate change and our best technological and governmental options for managing it. You can read more about Johannes' work at http://founderspledge.com/climate Timestamps: 00:00 Johannes's journey as an environmentalist 13:21 The drivers of climate change 23:00 Oil, coal, and gas 38:05 So…
How do low-income countries affected by climate change imagine their futures? How do they overcome these twin challenges? Will all nations eventually choose or be forced to go digital? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators…
Imagine A World: What if global challenges led to more centralization? (1:00:29)
What if we had one advanced AI system for the entire world? Would this lead to a world 'beyond' nation states - and do we want this? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures …
Tom Davidson on How Quickly AI Could Automate the Economy (1:56:22)
Tom Davidson joins the podcast to discuss how AI could quickly automate most cognitive tasks, including AI research, and why this would be risky. Timestamps: 00:00 The current pace of AI 03:58 Near-term risks from AI 09:34 Historical analogies to AI 13:58 AI benchmarks VS economic impact 18:30 AI takeoff speed and bottlenecks 31:09 Tom's model of AI …
How does who is involved in the design of AI affect the possibilities for our future? Why isn’t the design of AI inclusive already? Can technology solve all our problems? Can human nature change? Do we want either of these things to happen? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by…
Imagine A World: What if new governance mechanisms helped us coordinate? (1:02:35)
Are today's democratic systems equipped well enough to create the best possible future for everyone? If they're not, what systems might work better? And are governments around the world taking the destabilizing threats of new technologies seriously enough, or will it take a dramatic event, such as an AI-driven war, to get their act together? Imagin…
Coming Soon… The year is 2045. Humanity is not extinct, nor living in a dystopia. It has averted climate disaster and major wars. Instead, AI and other new technologies are helping to make the world more peaceful, happy and equal. How? This was what we asked the entrants of our Worldbuilding Contest to imagine last year. Our new podcast series digs d…
Robert Trager on International AI Governance and Cybersecurity at AI Companies (1:44:17)
Robert Trager joins the podcast to discuss AI governance, the incentives of governments and companies, the track record of international regulation, the security dilemma in AI, cybersecurity at AI companies, and skepticism about AI governance. We also discuss Robert's forthcoming paper International Governance of Civilian AI: A Jurisdictional Certi…
Jason Crawford on Progress and Risks from AI (1:25:43)
Jason Crawford joins the podcast to discuss the history of progress, the future of economic growth, and the relationship between progress and risks from AI. You can read more about Jason's work at https://rootsofprogress.org Timestamps: 00:00 Eras of human progress 06:47 Flywheels of progress 17:56 Main causes of progress 21:01 Progress and risk 32:…
Special: Jaan Tallinn on Pausing Giant AI Experiments (1:41:08)
On this special episode of the podcast, Jaan Tallinn talks with Nathan Labenz about Jaan's model of AI risk, the future of AI development, and pausing giant AI experiments. Timestamps: 0:00 Nathan introduces Jaan 4:22 AI safety and Future of Life Institute 5:55 Jaan's first meeting with Eliezer Yudkowsky 12:04 Future of AI evolution 14:58 Jaan's investm…
Joe Carlsmith on How We Change Our Minds About AI Risk (2:24:23)
Joe Carlsmith joins the podcast to discuss how we change our minds about AI risk, gut feelings versus abstract models, and what to do if transformative AI is coming soon. You can read more about Joe's work at https://joecarlsmith.com. Timestamps: 00:00 Predictable updating on AI risk 07:27 Abstract models versus gut feelings 22:06 How Joe began beli…
Dan Hendrycks on Why Evolution Favors AIs over Humans (2:26:38)
Dan Hendrycks joins the podcast to discuss evolutionary dynamics in AI development and how we could develop AI safely. You can read more about Dan's work at https://www.safe.ai Timestamps: 00:00 Corporate AI race 06:28 Evolutionary dynamics in AI 25:26 Why evolution applies to AI 50:58 Deceptive AI 1:06:04 Competition erodes safety 1:17:40 Evoluti…
Roman Yampolskiy on Objections to AI Safety (1:42:14)
Roman Yampolskiy joins the podcast to discuss various objections to AI safety, impossibility results for AI, and how much risk civilization should accept from emerging technologies. You can read more about Roman's work at http://cecs.louisville.edu/ry/ Timestamps: 00:00 Objections to AI safety 15:06 Will robots make AI risks salient? 27:51 Was earl…
Nathan Labenz on How AI Will Transform the Economy (1:06:54)
Nathan Labenz joins the podcast to discuss the economic effects of AI on growth, productivity, and employment. We also talk about whether AI might have catastrophic effects on the world. You can read more about Nathan's work at https://www.cognitiverevolution.ai Timestamps: 00:00 Economic transformation from AI 11:15 Productivity increases from tec…
Nathan Labenz joins the podcast to discuss the cognitive revolution, his experience red teaming GPT-4, and the potential near-term dangers of AI. You can read more about Nathan's work at https://www.cognitiverevolution.ai Timestamps: 00:00 The cognitive revolution 07:47 Red teaming GPT-4 24:00 Coming to believe in transformative AI 30:14 Is AI dept…
Maryanna Saenko on Venture Capital, Philanthropy, and Ethical Technology (1:17:46)
Maryanna Saenko joins the podcast to discuss how venture capital works, how to fund innovation, and what the fields of investing and philanthropy could learn from each other. You can read more about Maryanna's work at https://future.ventures Timestamps: 00:00 How does venture capital work? 09:01 Failure and success for startups 13:22 Is overconfide…
Connor Leahy joins the podcast to discuss the state of AI. Which labs are in front? Which alignment solutions might work? How will the public react to more capable AI? You can read more about Connor's work at https://conjecture.dev Timestamps: 00:00 Landscape of AI research labs 10:13 Is AGI a useful term? 13:31 AI predictions 17:56 Reinforcemen…
Connor Leahy on AGI and Cognitive Emulation (1:36:34)
Connor Leahy joins the podcast to discuss GPT-4, magic, cognitive emulation, demand for human-like AI, and aligning superintelligence. You can read more about Connor's work at https://conjecture.dev Timestamps: 00:00 GPT-4 16:35 "Magic" in machine learning 27:43 Cognitive emulations 38:00 Machine learning VS explainability 48:00 Human data = human A…
Lennart Heim joins the podcast to discuss options for governing the compute used by AI labs and potential problems with this approach to AI safety. You can read more about Lennart's work here: https://heim.xyz/about/ Timestamps: 00:00 Introduction 00:37 AI risk 03:33 Why focus on compute? 11:27 Monitoring compute 20:30 Restricting compute 26:54 Subs…