AF - Survey for alignment researchers: help us build better field-level models by Cameron Berg
Archived series ("Inactive feed" status)
When? This feed was archived on October 23, 2024 10:10. The last successful fetch was on September 19, 2024 11:06.
Why? Inactive feed status. Our servers have been unable to retrieve a valid podcast feed for a sustained period.
What now? You might be able to find a more up-to-date version using the search function. This series will no longer be checked for updates. If you believe this to be in error, please check whether the publisher's feed link below is valid, and contact support to request that the feed be restored or with any other concerns.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Survey for alignment researchers: help us build better field-level models, published by Cameron Berg on February 2, 2024 on The AI Alignment Forum.
AE Studio is launching a short, anonymous survey for alignment researchers, in order to develop a stronger model of various field-level dynamics in alignment.
This appears to be a surprisingly neglected research direction, and we believe it will yield specific, actionable insights into the community's technical views and broader characteristics.
The survey is a straightforward 5-10 minute Google Form with simple multiple-choice questions.
For every alignment researcher who completes the survey, we will donate $40 to a high-impact AI safety organization of your choosing (see specific options on the survey). We will also send each alignment researcher who wants one a customized report that compares their personal results to those of the field.
Together, we hope not only to raise money for some great AI safety organizations, but also to develop a better field-level model of the ideas and people that make up alignment research.
We will open-source all data and analyses when we publish the results. Thanks in advance for participating and for sharing this around with other alignment researchers!
Survey full link: https://forms.gle/d2fJhWfierRYvzam8
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.