
LW - AI takeoff and nuclear war by owencb

Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI takeoff and nuclear war, published by owencb on June 11, 2024 on LessWrong.

Summary

As we approach and pass through an AI takeoff period, the risk of nuclear war (or other all-out global conflict) will increase. An AI takeoff would involve the automation of scientific and technological research. This would lead to much faster technological progress, including military technologies. In such a rapidly changing world, some of the circumstances which underpin the current peaceful equilibrium will dissolve or change. There are then two risks[1]:

1. Fundamental instability. New circumstances could give a situation where there is no peaceful equilibrium it is in everyone's interests to maintain. e.g.:
- If nuclear calculus changes to make second strike capabilities infeasible
- If one party is racing ahead with technological progress and will soon trivially outmatch the rest of the world, without any way to credibly commit not to completely disempower them after it has done so

2. Failure to navigate. Despite the existence of new peaceful equilibria, decision-makers might fail to reach one. e.g.:
- If decision-makers misunderstand the strategic position, they may hold out for a more favourable outcome they (incorrectly) believe is fair
- If the only peaceful equilibria are convoluted and unprecedented, leaders may not be able to identify or build trust in them in a timely fashion
- Individual leaders might choose a path of war that would be good for them personally as they solidify power with AI; or nations might hold strongly to values like sovereignty that could make cooperation much harder

Of these two risks, it is likely simpler to work to reduce the risk of failure to navigate. The three straightforward strategies here are: research & dissemination, to ensure that the basic strategic situation is common knowledge among decision-makers; spreading positive-sum frames; and crafting and getting buy-in to meaningful commitments about sharing the power from AI, to reduce incentives for anyone to initiate war.

Additionally, powerful AI tools could change the landscape in ways that reduce either or both of these risks. A fourth strategy, therefore, is to differentially accelerate risk-reducing applications of AI. These could include:
- Tools to help decision-makers make sense of the changing world and make wise choices;
- Tools to facilitate otherwise impossible agreements via mutually trusted artificial judges;
- Tools for better democratic accountability.

Why do(n't) people go to war?

To date, the world has been pretty good at avoiding thermonuclear war. The doctrine of mutually assured destruction means that it's in nobody's interest to start a war (although the short timescales involved mean that accidentally starting one is a concern). The rapid development of powerful AI could disrupt the current equilibrium. From a very outside-view perspective, we might think that this is equally likely to result in, say, a 10x decrease in risk as a 10x increase. Even this would be alarming: since the annual probability seems fairly low right now, a big decrease in risk is merely nice-to-have, but a big increase could be catastrophic. To get more clarity than that, we'll look at the theoretical reasons people might go to war, and then look at how an AI takeoff period might impact each of these.
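The asymmetry just noted is easier to see with numbers. Here is a minimal sketch; the baseline annual probability and the 20-year horizon are assumptions chosen purely for illustration, not figures from the post:

```python
# Illustrative numbers only: the baseline risk and horizon are assumptions,
# not figures from the post.
baseline_annual_risk = 0.01              # assumed ~1% annual chance of all-out war

decreased = baseline_annual_risk / 10    # the "10x decrease" scenario
increased = baseline_annual_risk * 10    # the "10x increase" scenario

# Change in annual risk relative to the baseline:
print(f"risk avoided by a 10x decrease: {baseline_annual_risk - decreased:.2%}")  # 0.90%
print(f"risk added by a 10x increase:   {increased - baseline_annual_risk:.2%}")  # 9.00%

# Cumulative risk over a 20-year horizon, assuming independent years:
def cumulative(p, years=20):
    return 1 - (1 - p) ** years

for label, p in [("baseline", baseline_annual_risk),
                 ("10x decrease", decreased),
                 ("10x increase", increased)]:
    print(f"20-year risk, {label}: {cumulative(p):.1%}")
```

Because the baseline is already low, a tenfold reduction removes only a small amount of absolute risk, while a tenfold increase dominates the change in expected harm.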
Rational reasons to go to war

War is inefficient; for any war, there should be some possible world which doesn't have that war in which everyone is better off. So why do we have war? Fearon's classic paper on Rationalist Explanations for War explains that there are essentially three mechanisms that can lead to war between states that are all acting rationally:

1. Commitment problems

If you're about to build a superweapon, I might want to attack now. We might both be better off if I didn't attack, and I paid y...
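To make the "war is inefficient" point above concrete, here is a minimal numerical sketch of the bargaining range in Fearon's setup; the win probability and cost figures are invented for illustration:

```python
# Hypothetical numbers to illustrate why war is inefficient in Fearon's setup.
# Two states bargain over a prize normalised to 1.
p_a_wins = 0.6    # probability state A wins a war (assumption)
cost_a   = 0.10   # A's cost of fighting, as a share of the prize (assumption)
cost_b   = 0.15   # B's cost of fighting (assumption)

# Expected payoffs from fighting it out:
war_payoff_a = p_a_wins - cost_a          # 0.50
war_payoff_b = (1 - p_a_wins) - cost_b    # 0.25

# Any peaceful split giving A a share x with
#   war_payoff_a <= x <= 1 - war_payoff_b
# leaves both sides at least as well off as fighting.
low, high = war_payoff_a, 1 - war_payoff_b
print(f"peaceful splits both sides prefer to war: A gets between {low:.2f} and {high:.2f}")

# The width of that range equals the total cost of war, so it is non-empty
# whenever fighting is costly at all.
assert abs((high - low) - (cost_a + cost_b)) < 1e-9
```

Since such a range exists whenever fighting is costly, a rational explanation for war has to identify something that stops the parties from finding, or sticking to, a deal inside it.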
