Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An issue with training schemers with supervised fine-tuning, published by Fabien Roger on June 27, 2024 on The AI Alignment Forum.
This is a rough write-up of ideas by Ryan Greenblatt. The core ideas are his and mistakes are mine. Thanks to him, Buck Shlegeris, and Aghyad Deeb for feedback on the draft of this post.
One way to use scheming AIs is to train them to imitate humans. When does imitation robustly avoid frequent and intentional failures?
On single-token classification tasks, imitation works great and can allow you to get a lot of high-quality labor with a tiny rate of intentional failures;
On tasks that involve generating many tokens, imitation potentially fails by default: schemers could notice when an output was generated by themselves, which enables them to consistently behave badly in deployment while doing best-effort imitation during training.
We describe two techniques that avoid imitation failures on generation tasks:
A technique that works perfectly but is too expensive
A technique that we think works in practice
While pure imitation may not be sufficiently competitive, we think it could prove useful for tasks such as:
(No-CoT) single-token classification;
KL-regularization for RL: we may want to penalize models trained with RL for straying too far from a human-imitation prior (and we would want this human prior not to be malign!) (see Appendix; a rough sketch of such a penalty follows this list);
Tasks for which it is fine to take the competitiveness hit from not using RL, but which weaker trusted models are not good enough at.[1]
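To make the KL-regularization idea above concrete, here is a minimal PyTorch-style sketch. It is our own illustration rather than an implementation from this post: the logit shapes, the frozen imitation model, and the coefficient beta are all assumptions. The RL loss simply gains a term penalizing per-token divergence from a human-imitation model.

```python
# Minimal sketch: penalize an RL policy for drifting from a frozen
# human-imitation prior via a per-token KL term. Shapes and `beta`
# are illustrative assumptions, not values from the post.
import torch
import torch.nn.functional as F

def kl_to_imitation_prior(policy_logits, prior_logits):
    """Per-token KL(policy || prior), averaged over batch and sequence.

    Both arguments are [batch, seq_len, vocab] logit tensors.
    """
    policy_logprobs = F.log_softmax(policy_logits, dim=-1)
    prior_logprobs = F.log_softmax(prior_logits, dim=-1)
    kl = (policy_logprobs.exp() * (policy_logprobs - prior_logprobs)).sum(dim=-1)
    return kl.mean()

def rl_loss_with_kl_penalty(pg_loss, policy_logits, prior_logits, beta=0.1):
    """Add the KL penalty to whatever policy-gradient loss the RL setup produces.

    The human-imitation prior is kept frozen, so its logits are detached.
    """
    return pg_loss + beta * kl_to_imitation_prior(policy_logits, prior_logits.detach())
```

The prior is detached so only the RL policy receives gradients; beta trades off task reward against staying close to the human imitation.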
Assumptions
In this post, we focus on scenarios where:
AIs are smart (around human-level or above);
AIs are scheming and know how we are training and deploying them;
We want to use these AIs productively (e.g. to do safety research)[2];
Training and deployment tasks are indistinguishable (e.g. because we randomly sampled training tasks from the deployment distribution);
SGD works: we ignore gradient hacking and AI architectures where SGD is very sample inefficient;
AIs can't cause catastrophes in very few actions (e.g. because they are too weak to break safety measures that address these sorts of concerns);
Even when AIs try their best, they can't perfectly predict the probability distribution over the actions that humans[3] would take.
Easy-mode: imitating human judgment on classification tasks
Here is how you can use an AI to 100x the speed at which you can complete a billion single-token (IID) classification tasks (e.g. detecting software bugs, scoring project ideas, …):
1. Sample 1% of the tasks, and ask humans to label them (using their judgment);
2. Train the AI on that 1% of tasks you have solved yourself, using normal training (e.g. SGD with a cross-entropy loss);
3. Use the AI to solve the remaining 99% of tasks.
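A minimal sketch of this recipe in Python, assuming a HuggingFace-style classifier. The helpers load_tasks() and human_label() and the checkpoint name "my-base-model" are hypothetical placeholders for the parts the recipe leaves abstract, not anything specified in this post:

```python
# Sketch of the 1%-labeling recipe. load_tasks() and human_label() are
# hypothetical stand-ins for fetching tasks and human-judgment labels;
# "my-base-model" is a placeholder checkpoint name.
import random
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tasks = load_tasks()                                   # one text per classification task
random.shuffle(tasks)
n_train = len(tasks) // 100
train_tasks, deploy_tasks = tasks[:n_train], tasks[n_train:]   # 1% / 99% split

labeled = [(t, human_label(t)) for t in train_tasks]   # step 1: human judgment on the 1%

tokenizer = AutoTokenizer.from_pretrained("my-base-model")
model = AutoModelForSequenceClassification.from_pretrained("my-base-model", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Step 2: ordinary training (SGD/AdamW with a cross-entropy loss) on the labeled 1%.
model.train()
for text, label in labeled:
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    loss = model(**inputs, labels=torch.tensor([label])).loss   # cross-entropy
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Step 3: use the trained model to solve the remaining 99% of tasks.
model.eval()
with torch.no_grad():
    preds = {
        t: model(**tokenizer(t, return_tensors="pt", truncation=True)).logits.argmax(-1).item()
        for t in deploy_tasks
    }
```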
This will get you an AI which intentionally failed at most a few times in the whole training set, because intentional failures would have quickly been trained away[4]. Since the training tasks were sampled from the same distribution as the rest, that failure rate carries over to the remaining 99%: you got at most a few hundred intentional failures in the whole set of tasks. That is an intentional failure rate of at most around 1/10M, not bad![5]
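Spelling out the arithmetic (with \(k\) standing for the handful of intentional failures that slip through before being trained away):

$$
\underbrace{10^9 \times 1\%}_{\text{training tasks}} = 10^7,
\qquad
\text{intentional failure rate} \le \frac{k}{10^7},
\qquad
\text{total intentional failures} \le \frac{k}{10^7} \times 10^9 = 100\,k,
$$

which for small \(k\) is at most a few hundred failures, i.e. a rate on the order of \(1/10^7\) (1/10M).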
In other words, human imitation avoids intentional low-stakes failures: the model won't often be worse than a best-effort imitation of a human. (But best-effort imitation of a human could be catastrophic if that isn't sufficiently capable.)
The basic problem with imitating humans on generation tasks
But most tasks we care about involve generating many tokens (e.g. coding a complex experiment, writing a detailed project proposal, …). Does imitation work for such generation tasks? Not quite. Here is a problem that could arise if you train an AI to imitate human answers on 1% of tasks and use it on the remaining 99% without additional supervision:
On the 1% of training tasks, the AI tries its best to imitate human answers when the star...