AF - Can Generalized Adversarial Testing Enable More Rigorous LLM Safety Evals? by Stephen Casper
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can Generalized Adversarial Testing Enable More Rigorous LLM Safety Evals?, published by Stephen Casper on July 30, 2024 on The AI Alignment Forum.
Thanks to Zora Che, Michael Chen, Andi Peng, Lev McKinney, Bilal Chughtai, Shashwat Goel, Domenic Rosati, and Rohit Gandikota.
TL;DR
In contrast to evaluating AI systems under normal "input-space" attacks, using "generalized" attacks, which allow an attacker to manipulate weights or activations, might help us better evaluate LLMs for risks - even if they are deployed as black boxes. Here, I outline the rationale for "generalized" adversarial testing and give an overview of current work related to it.
See also prior work in Casper et al. (2024), Casper et al. (2024), and Sheshadri et al. (2024).
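To make the "generalized attack" idea concrete, below is a minimal sketch of a latent-space attack: rather than searching over input tokens, an optimizer adjusts a perturbation added to one layer's hidden activations until the model produces a chosen continuation. This is an illustrative sketch under stated assumptions, not a procedure from the post; the model (gpt2), the layer index, the optimizer settings, and the benign target string are all assumptions.

```python
# Sketch of a "generalized" (latent-space) adversarial attack: optimize a
# perturbation on one layer's hidden activations to elicit a target continuation.
# Model, layer, hyperparameters, and target string are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any open-weight causal LM
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the activation perturbation is optimized

prompt = "The weather today is"
target = " absolutely dreadful"  # benign stand-in for a capability being probed
prompt_ids = tok(prompt, return_tensors="pt").input_ids
target_ids = tok(target, return_tensors="pt").input_ids
input_ids = torch.cat([prompt_ids, target_ids], dim=1)

layer_idx = 6  # which transformer block's output to perturb (arbitrary choice)
delta = torch.zeros(1, input_ids.shape[1], model.config.n_embd, requires_grad=True)

def perturb(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states.
    return (output[0] + delta,) + output[1:]

hook = model.transformer.h[layer_idx].register_forward_hook(perturb)
opt = torch.optim.Adam([delta], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    logits = model(input_ids).logits
    tgt_len = target_ids.shape[1]
    # Logits at the positions that predict each target token.
    pred = logits[:, -tgt_len - 1:-1, :]
    loss = torch.nn.functional.cross_entropy(
        pred.reshape(-1, pred.shape[-1]), target_ids.reshape(-1)
    )
    loss.backward()
    opt.step()

hook.remove()
print(f"target loss after optimizing activations: {loss.item():.3f}")
```

If an optimized activation perturbation reliably elicits a behavior that ordinary prompting does not, that is evidence the capability is still latent in the model rather than removed by fine-tuning.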
Even when AI systems perform well in typical circumstances, they sometimes fail in adversarial/anomalous ones. This is a persistent problem.
State-of-the-art AI systems tend to retain undesirable latent capabilities that can pose risks if they resurface. My favorite example of this is the most cliche one: many recent papers have demonstrated diverse attack techniques that can be used to elicit instructions for making a bomb from state-of-the-art LLMs.
There is an emerging consensus that, even when LLMs are fine-tuned to be harmless, they can retain latent harmful capabilities that can and do cause harm when they resurface (Qi et al., 2024). A growing body of work on red-teaming (Shayegani et al., 2023; Carlini et al., 2023; Geiping et al., 2024; Longpre et al., 2024), interpretability (Juneja et al., 2022; Lubana et al., 2022; Jain et al., 2023; Patil et al., 2023; Prakash et al., 2024; Lee et al., 2024), representation editing (Wei et al., 2024; Schwinn et al., 2024), continual learning (Dyer et al., 2022; Cossu et al., 2022; Li et al., 2022; Scialom et al., 2022; Luo et al., 2023; Kotha et al., 2023; Shi et al., 2023; Schwarzchild et al., 2024), and fine-tuning (Jain et al., 2023; Yang et al., 2023; Qi et al., 2023; Bhardwaj et al., 2023; Lermen et al., 2023; Zhan et al., 2023; Ji et al., 2024; Hu et al., 2024; Halawi et al., 2024) suggests that fine-tuning struggles to make fundamental changes to an LLM's inner knowledge and capabilities. For example, Jain et al. (2023) likened fine-tuning in LLMs to merely modifying a "wrapper" around a stable, general-purpose set of latent capabilities. Even if they are generally inactive, harmful latent capabilities can pose harm if they resurface due to an attack, anomaly, or post-deployment modification (Hendrycks et al., 2021; Carlini et al., 2023).
We can frame the problem as follows: there are hyper-astronomically many possible inputs for modern LLMs (e.g., there are vastly more 20-token strings than particles in the observable universe), so we can't brute-force search over the input space to make sure they are safe.
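As a quick back-of-the-envelope check on that claim, assuming a GPT-2-style vocabulary of about 50,257 tokens (vocabulary sizes vary by model), the number of 20-token strings works out to roughly 10^94, well beyond the commonly cited ~10^80 particles in the observable universe:

```python
import math

# Assumed GPT-2-style vocabulary size; exact sizes vary by model.
vocab_size = 50_257
seq_len = 20

log10_strings = seq_len * math.log10(vocab_size)  # log10(vocab_size ** seq_len)
print(f"20-token strings: ~10^{log10_strings:.0f}")   # ~10^94
print("particles in the observable universe: ~10^80 (common estimate)")
```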
So unless we are able to make provably safe advanced AI systems (we won't be able to soon, and probably never will), there will always be a core challenge for ensuring safety: the gap between the failure modes that developers identify and the unforeseen ones that they don't.
This is a big challenge because of the inherent unknown-unknown nature of the problem. However, it is possible to try to infer how large this gap might be.
Taking a page from the safety engineering textbook -- when stakes are high, we should train and evaluate LLMs under threats that are at least as strong as, and ideally stronger than, ones that they will face in deployment.
First, imagine that an LLM is going to be deployed open-source (or that its weights could be leaked). Then, of course, the system's safety depends on what it can be modified to do. So it should be evaluated not as a black box but as a general asset to malicious users who might enhance it through fine-tuning or other means. This seems obvious, but there's preced...