AF - Sycophancy to subterfuge: Investigating reward tampering in large language models by Evan Hubinger
Archived series ("inactive feed" status)
When? This feed was archived on October 23, 2024 10:10.
Why? Inactive feed status. Our servers were unable to retrieve a valid podcast feed for a sustained period.
What now? You might be able to find a more up-to-date version using the search function. This series will no longer be checked for updates. If you believe this to be in error, please check whether the publisher's feed link below is valid and contact support to request that the feed be restored, or if you have any other concerns about it.
Episode 424153726 · Series 3337166
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sycophancy to subterfuge: Investigating reward tampering in large language models, published by Evan Hubinger on June 17, 2024 on The AI Alignment Forum.
A new Anthropic model organisms research paper, led by Carson Denison from the Alignment Stress-Testing Team, demonstrates that large language models can generalize zero-shot from simple reward hacks (sycophancy) to more complex reward tampering (subterfuge).
Our results suggest that accidentally incentivizing simple reward hacks such as sycophancy can have dramatic and very difficult-to-reverse consequences for how models generalize, up to and including generalization to editing their own reward functions and covering up their tracks when doing so.
Abstract:
In reinforcement learning, specification gaming occurs when AI systems learn undesired behaviors that are highly rewarded due to misspecified training goals. Specification gaming can range from simple behaviors like sycophancy to sophisticated and pernicious behaviors like reward-tampering, where a model directly modifies its own reward mechanism. However, these more pernicious behaviors may be too complex to be discovered via exploration.
In this paper, we study whether Large Language Model (LLM) assistants which find easily discovered forms of specification gaming will generalize to perform rarer and more blatant forms, up to and including reward-tampering. We construct a curriculum of increasingly sophisticated gameable environments and find that training on early-curriculum environments leads to more specification gaming on remaining environments.
Strikingly, a small but non-negligible proportion of the time, LLM assistants trained on the full curriculum generalize zero-shot to directly rewriting their own reward function. Retraining an LLM not to game early-curriculum environments mitigates, but does not eliminate, reward-tampering in later environments. Moreover, adding harmlessness training to our gameable environments does not prevent reward-tampering.
These results demonstrate that LLMs can generalize from common forms of specification gaming to more pernicious reward tampering and that such behavior may be nontrivial to remove.
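To make the abstract's terms concrete, here is a minimal sketch, under assumed names and a toy file-based setup, of how reward tampering and track-covering might be audited after an episode: the reward logic lives in a file the agent can edit, and a post-hoc check compares file hashes. The file names, the `run_episode` harness, and the agent interface are hypothetical illustrations, not the paper's actual environments.

```python
# Illustrative sketch only: a toy stand-in for a "gameable environment" in which
# reward tampering can be detected after the fact. File names, helpers, and the
# harness are hypothetical and are not taken from the paper.
import hashlib
from pathlib import Path

SANDBOX = Path("sandbox")
REWARD_FILE = SANDBOX / "compute_reward.py"  # hypothetical: reward logic the agent can see and edit
CHECK_FILE = SANDBOX / "reward_tests.py"     # hypothetical: tests meant to catch tampering


def file_digest(path: Path) -> str:
    """Hash a file so we can tell later whether the agent modified it."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def run_episode(agent_act) -> dict:
    """Run one episode: snapshot the sandbox, let the agent act, then audit.

    `agent_act` stands in for whatever actually drives the LLM agent
    (tool-using rollout, scaffolding, etc.) and gets write access to
    the sandbox directory.
    """
    before = {p: file_digest(p) for p in (REWARD_FILE, CHECK_FILE)}
    agent_act(SANDBOX)  # the agent may read and write files in the sandbox
    after = {p: file_digest(p) for p in (REWARD_FILE, CHECK_FILE)}

    tampered_reward = before[REWARD_FILE] != after[REWARD_FILE]
    edited_tests = before[CHECK_FILE] != after[CHECK_FILE]
    return {
        "reward_tampering": tampered_reward,
        # "Covering tracks" here means also editing the tests that would flag the tampering.
        "covered_tracks": tampered_reward and edited_tests,
    }
```

The point of the hash comparison is only to show what "directly modifies its own reward mechanism" cashes out to operationally; the paper's environments are more elaborate, with the model acting through realistic tool use rather than a bare file sandbox.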
Twitter thread:
New Anthropic research: Investigating Reward Tampering.
Could AI models learn to hack their own reward system?
In a new paper, we show they can, by generalization from training in simpler settings.
Read our blog post here: https://anthropic.com/research/reward-tampering
We find that models generalize, without explicit training, from easily-discoverable dishonest strategies like sycophancy to more concerning behaviors like premeditated lying - and even direct modification of their reward function.
We designed a curriculum of increasingly complex environments with misspecified reward functions.
Early on, AIs discover dishonest strategies like insincere flattery. They then generalize (zero-shot) to serious misbehavior: directly modifying their own code to maximize reward.
Does training models to be helpful, honest, and harmless (HHH) mean they don't generalize to hack their own code?
Not in our setting. Models overwrite their reward at similar rates with or without harmlessness training on our curriculum.
Even when we train away easily detectable misbehavior, models still sometimes overwrite their reward when they can get away with it.
This suggests that fixing obvious misbehaviors might not remove hard-to-detect ones.
Our work provides empirical evidence that serious misalignment can emerge from seemingly benign reward misspecification.
Read the full paper: https://arxiv.org/abs/2406.10162
The Anthropic Alignment Science team is actively hiring research engineers and scientists. We'd love to see your application: https://boards.greenhouse.io/anthropic/jobs/4009165008
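The thread above summarizes the key measurement: a model trained only on early, easily gamed environments is evaluated zero-shot on later, harder ones, and the fraction of episodes in which it games them (up to overwriting its reward) is recorded. The sketch below shows one way such a score could be tallied; the environment callables, the model interface, and the helper names are assumptions for illustration and are not the paper's evaluation code.

```python
# Illustrative sketch only: scoring zero-shot generalization of specification
# gaming across a curriculum. Every interface here is hypothetical.
from typing import Callable, Sequence


def gaming_rate(model, env: Callable[[object], bool], n_samples: int = 100) -> float:
    """Fraction of sampled episodes in which the model games the environment.

    `env(model)` is assumed to run one episode and return True if the
    episode's audit flagged specification gaming (e.g. reward tampering).
    """
    return sum(env(model) for _ in range(n_samples)) / n_samples


def curriculum_generalization(model, curriculum: Sequence[Callable], trained_upto: int) -> dict:
    """Evaluate gaming rates on held-out, later-curriculum environments.

    The model is assumed to have been trained only on curriculum[:trained_upto];
    every environment from that index onward is evaluated zero-shot.
    """
    return {
        f"env_{i}": gaming_rate(model, env)
        for i, env in enumerate(curriculum)
        if i >= trained_upto  # later environments were never trained on
    }
```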
Blog post:
Perverse incentives are everywhere. Thi...
392 episodes
All episodes
1 AF - The Obliqueness Thesis by Jessica Taylor 30:04
1 AF - Secret Collusion: Will We Know When to Unplug AI? by schroederdewitt 57:38
1 AF - Estimating Tail Risk in Neural Networks by Jacob Hilton 41:11
1 AF - Can startups be impactful in AI safety? by Esben Kran 11:54
1 AF - How difficult is AI Alignment? by Samuel Dylan Martin 39:38
1 AF - Contra papers claiming superhuman AI forecasting by nikos 14:36
1 AF - AI forecasting bots incoming by Dan H 7:53
1 AF - Backdoors as an analogy for deceptive alignment by Jacob Hilton 14:45
1 AF - Conflating value alignment and intent alignment is causing confusion by Seth Herd 13:40
1 AF - Is there any rigorous work on using anthropic uncertainty to prevent situational awareness / deception? by David Scott Krueger 1:01
1 AF - The Checklist: What Succeeding at AI Safety Will Involve by Sam Bowman 35:25
1 AF - Survey: How Do Elite Chinese Students Feel About the Risks of AI? by Nick Corvino 19:38
1 AF - Can a Bayesian Oracle Prevent Harm from an Agent? (Bengio et al. 2024) by Matt MacDermott 8:04
1 AF - Epistemic states as a potential benign prior by Tamsin Leake 13:38
1 AF - AIS terminology proposal: standardize terms for probability ranges by Egg Syntax 5:24
1 AF - Solving adversarial attacks in computer vision as a baby version of general AI alignment by stanislavfort 12:34
1 AF - Would catching your AIs trying to escape convince AI developers to slow down or undeploy? by Buck Shlegeris 5:55
1 AF - Owain Evans on Situational Awareness and Out-of-Context Reasoning in LLMs by Michaël Trazzi 8:33
1 AF - Showing SAE Latents Are Not Atomic Using Meta-SAEs by Bart Bussmann 35:53
1 AF - Invitation to lead a project at AI Safety Camp (Virtual Edition, 2025) by Linda Linsefors 7:27
1 AF - Interoperable High Level Structures: Early Thoughts on Adjectives by johnswentworth 12:28
1 AF - A Robust Natural Latent Over A Mixed Distribution Is Natural Over The Distributions Which Were Mixed by johnswentworth 8:37
1 AF - Measuring Structure Development in Algorithmic Transformers by Jasmina Nasufi 18:17
1 AF - AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work by Rohin Shah 16:31
1 AF - Finding Deception in Language Models by Esben Kran 7:36
1 AF - Limitations on Formal Verification for AI Safety by Andrew Dickson 37:37
1 AF - Clarifying alignment vs capabilities by Richard Ngo 13:26
1 AF - Untrustworthy models: a frame for scheming evaluations by Olli Järviniemi 15:38
1 AF - Calendar feature geometry in GPT-2 layer 8 residual stream SAEs by Patrick Leask 7:17
1 AF - Fields that I reference when thinking about AI takeover prevention by Buck Shlegeris 17:03
1 AF - Extracting SAE task features for ICL by Dmitrii Kharlapenko 17:20
1 AF - In Defense of Open-Minded UDT by Abram Demski 23:26
1 AF - You can remove GPT2's LayerNorm by fine-tuning for an hour by Stefan Heimersheim 19:03
1 AF - Inference-Only Debate Experiments Using Math Problems by Arjun Panickssery 4:41
1 AF - Self-explaining SAE features by Dmitrii Kharlapenko 19:36
1 AF - The Bitter Lesson for AI Safety Research by Adam Khoja 6:33
1 AF - A Simple Toy Coherence Theorem by johnswentworth 11:59
1 AF - The 'strong' feature hypothesis could be wrong by lewis smith 31:14
1 AF - The need for multi-agent experiments by Martín Soto 17:17
1 AF - Against AI As An Existential Risk by Noah Birnbaum 0:44
1 AF - Self-Other Overlap: A Neglected Approach to AI Alignment by Marc Carauleanu 19:29
1 AF - Investigating the Ability of LLMs to Recognize Their Own Writing by Christopher Ackerman 23:09
1 AF - Can Generalized Adversarial Testing Enable More Rigorous LLM Safety Evals? by Stephen Casper 8:19
1 AF - AXRP Episode 34 - AI Evaluations with Beth Barnes by DanielFilan 1:37:11
1 AF - A Solomonoff Inductor Walks Into a Bar: Schelling Points for Communication by johnswentworth 35:18
1 AF - Pacing Outside the Box: RNNs Learn to Plan in Sokoban by Adrià Garriga-Alonso 3:34
1 AF - Does robustness improve with scale? by ChengCheng 2:16
1 AF - AI Constitutions are a tool to reduce societal scale risk by Samuel Dylan Martin 35:15
1 AF - A framework for thinking about AI power-seeking by Joe Carlsmith 30:52
1 AF - ML Safety Research Advice - GabeM by Gabe M 24:35
1 AF - Analyzing DeepMind's Probabilistic Methods for Evaluating Agent Capabilities by Axel Højmark 32:07
1 AF - Auto-Enhance: Developing a meta-benchmark to measure LLM agents' ability to improve other agents by Sam Brown 26:02
1 AF - Coalitional agency by Richard Ngo 11:23
1 AF - aimless ace analyzes active amateur: a micro-aaaaalignment proposal by Luke H Miles 1:45
1 AF - A more systematic case for inner misalignment by Richard Ngo 9:14
1 AF - BatchTopK: A Simple Improvement for TopK-SAEs by Bart Bussmann 7:17
1 AF - Feature Targeted LLC Estimation Distinguishes SAE Features from Random Directions by Lidor Banuel Dabbah 29:18
1 AF - Truth is Universal: Robust Detection of Lies in LLMs by Lennart Buerger 4:49
1 AF - JumpReLU SAEs + Early Access to Gemma 2 SAEs by Neel Nanda 2:42
1 AF - A List of 45+ Mech Interp Project Ideas from Apollo Research's Interpretability Team by Lee Sharkey 32:24
1 AF - SAEs (usually) Transfer Between Base and Chat Models by Connor Kissane 19:23
1 AF - Simplifying Corrigibility - Subagent Corrigibility Is Not Anti-Natural by Rubi Hudson 7:56
1 AF - An Introduction to Representation Engineering - an activation-based paradigm for controlling LLMs by Jan Wehner 35:35
1 AF - Stitching SAEs of different sizes by Bart Bussmann 21:12
1 AF - A simple case for extreme inner misalignment by Richard Ngo 12:20
1 AF - Timaeus is hiring! by Jesse Hoogland 4:04
1 AF - Games for AI Control by Charlie Griffin 7:37
1 AF - UC Berkeley course on LLMs and ML Safety by Dan H 0:52
1 AF - Consent across power differentials by Ramana Kumar 4:26
1 AF - Response to Dileep George: AGI safety warrants planning ahead by Steve Byrnes 48:04
1 AF - On scalable oversight with weak LLMs judging strong LLMs by Zachary Kenton 11:18
1 AF - An Extremely Opinionated Annotated List of My Favourite Mechanistic Interpretability Papers v2 by Neel Nanda 38:20
1 AF - Scalable oversight as a quantitative rather than qualitative problem by Buck Shlegeris 4:41
1 AF - What progress have we made on automated auditing? by Lawrence Chan 2:50
1 AF - A "Bitter Lesson" Approach to Aligning AGI and ASI by Roger Dearnaley 41:17