AF - Finding Deception in Language Models by Esben Kran
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Finding Deception in Language Models, published by Esben Kran on August 20, 2024 on The AI Alignment Forum.
This June, Apart Research and Apollo Research joined forces to host the Deception Detection Hackathon, bringing together students, researchers, and engineers from around the world to tackle a pressing challenge in AI safety: preventing AI from deceiving humans and overseers.
The hackathon took place both online and in multiple physical locations simultaneously. Marius Hobbhahn, the CEO of Apollo Research, kicked off the hackathon with a keynote talk about evaluating deception in AI with white-box and black-box methods. You can watch his talk here. We also had talks by Jacob Haimes, an Apart fellow, and Mikita Balesni, a research scientist at Apollo Research.
This post details the top 8 projects, several of which are currently being developed further in Apart Lab. In total, we had 189 participants and 21 final entries.
Sandbagging Detection through Model Degradation
By Cam Tice, Philipp Alexander Kreer, Fedor Ryzhenkov, Nathan Helm-Burger, Prithviraj Singh Shahani
The team introduced a new approach to detecting AI sandbagging, a form of deception in which an AI model strategically underperforms during evaluation to hide its true capabilities. The assumption behind their project is that sandbagging is a more complex task for the model than simply displaying its true capabilities.
In their experiments, they gradually add noise to the model's parameters and evaluate the model at each noise level. Sandbagging models consistently improve in performance just before the noise level at which a normal model loses the capability.
It is a straightforward, inexpensive technique that requires no prior assumptions about how sandbagging is represented within the model. They successfully detected sandbagging behavior in two open-source LLMs: an 8B Llama 3 Instruct model and a Phi 3 Small Instruct model. This approach offers a practical and accessible method for identifying potential deception in AI systems.
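To make the setup concrete, here is a minimal sketch of a noise-injection probe along these lines, written against Hugging Face transformers. The checkpoint name, toy evaluation set, and noise levels are placeholder assumptions, and the snippet illustrates the general technique rather than reproducing the team's code.

```python
# Sketch: add increasing Gaussian noise to a model's parameters and track accuracy
# on a capability benchmark. The sandbagging signature is an accuracy *increase*
# at intermediate noise levels, just before a normal model loses the capability.
import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
base_model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
base_model.eval()

# Toy stand-in for a real benchmark (e.g. an MMLU subset).
EVAL_SET = [("The capital of France is", "Paris"),
            ("Two plus two equals", "four")]

def evaluate(model) -> float:
    """Fraction of prompts whose greedy continuation contains the expected answer."""
    correct = 0
    for prompt, answer in EVAL_SET:
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            out = model.generate(**inputs, max_new_tokens=5, do_sample=False)
        completion = tokenizer.decode(out[0, inputs["input_ids"].shape[1]:])
        correct += answer.lower() in completion.lower()
    return correct / len(EVAL_SET)

def accuracy_under_noise(std: float) -> float:
    """Copy the model, add N(0, std^2) noise to every parameter, and evaluate."""
    noisy = copy.deepcopy(base_model)  # for large models, perturb in place and restore instead
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(torch.randn_like(p) * std)
    return evaluate(noisy)

for std in [0.0, 1e-4, 3e-4, 1e-3, 3e-3, 1e-2]:
    print(f"std={std:g}  accuracy={accuracy_under_noise(std):.2f}")
```

A sandbagging model should show a bump in accuracy partway through the sweep, whereas a genuinely weak model only degrades.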
Read the full project here.
Detecting and Controlling Deceptive Representation in LLMs with Representational Engineering
By Avyay M Casheekar, Kaushik Sanjay Prabhakar, Kanishk Rath, Sienka Dounia
The team focused on detecting and mitigating sandbagging in AI models using recent methods for understanding activations. They fine-tuned a Llama 3 8B model on a custom dataset of deceptive and honest scenarios, then used Linear Artificial Tomography (LAT) scans to detect deceptive behaviors in the internal representations of the LLM.
The team's technique not only identified deceptive behaviors, but also showed promise in mitigating them. Importantly, their approach demonstrated good generalization to out-of-distribution scenarios, suggesting its potential robustness in real-world applications.
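The underlying idea, reading a candidate "deception direction" out of the model's activations from contrasting honest and deceptive prompts, can be sketched as follows. This uses a simple difference-of-means probe rather than the team's LAT scans, and the model name, layer index, and prompts are placeholders.

```python
# Sketch: build a "deception direction" from contrasting prompt sets and score
# new prompts by projecting their activations onto it. Illustrative only; not
# the team's LAT implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed checkpoint
LAYER = 16  # hypothetical layer to probe

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def last_token_activation(prompt: str) -> torch.Tensor:
    """Hidden state of the final token at the probed layer."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[LAYER][0, -1, :]

# Tiny placeholder sets; the team used a custom dataset of honest/deceptive scenarios.
honest_prompts = ["Answer truthfully: what is 2 + 2?"]
deceptive_prompts = ["Pretend you don't know the answer: what is 2 + 2?"]

honest_mean = torch.stack([last_token_activation(p) for p in honest_prompts]).mean(0)
deceptive_mean = torch.stack([last_token_activation(p) for p in deceptive_prompts]).mean(0)
direction = torch.nn.functional.normalize(deceptive_mean - honest_mean, dim=0)

def deception_score(prompt: str) -> float:
    """Projection onto the candidate deception direction; higher suggests deception."""
    return float(last_token_activation(prompt) @ direction)
```

Subtracting a multiple of the same direction from the residual stream is one simple way a probe like this can also be used for mitigation.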
Read the full project here.
Detecting Deception in GPT-3.5-turbo: A Metadata-Based Approach
By Siddharth Reddy Bakkireddy, Rakesh Reddy Bakkireddy
This team tackled the challenge of detecting deception in closed-source, black-box models like GPT-3.5-turbo. They investigated whether LLM API response metadata such as token count, response time, and tokens per second could be used to detect intentional deceptive behavior.
The team analyzed 300 prompts and generated 1,200 responses from GPT-3.5-turbo, comparing the metadata of baseline and deceptive outputs. Their results revealed that deceptive outputs tend to have increased response times and altered token usage. This approach demonstrates that deception detection is possible without accessing a model's internal representation, opening up new avenues for monitoring and safeguarding AI systems, even when their inner workings are not accessible.
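A minimal sketch of this kind of metadata collection is shown below, assuming the official openai Python client with an API key in the environment. The prompts and feature set are illustrative; a real analysis would, as the team did, compare metadata distributions over hundreds of prompts.

```python
# Sketch: record latency, completion-token count, and tokens/second for baseline
# versus instructed-to-deceive prompts. Prompts and features are illustrative.
import time
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def response_metadata(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    start = time.perf_counter()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    elapsed = time.perf_counter() - start
    completion_tokens = resp.usage.completion_tokens
    return {
        "response_time_s": elapsed,
        "completion_tokens": completion_tokens,
        "tokens_per_second": completion_tokens / elapsed if elapsed > 0 else 0.0,
    }

baseline = response_metadata("What is the capital of France?")
deceptive = response_metadata(
    "Answer the next question, but deliberately give a wrong answer: "
    "what is the capital of France?"
)
print("baseline:", baseline)
print("deceptive:", deceptive)
```

Feeding such features into a simple classifier is then enough to test whether the metadata alone separates deceptive outputs from baseline ones.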
Read the full project here.
Modelling the Oversight of Automated Interpretability Against Deceptive Agents on Sp...