AF - Investigating the Ability of LLMs to Recognize Their Own Writing by Christopher Ackerman
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Investigating the Ability of LLMs to Recognize Their Own Writing, published by Christopher Ackerman on July 30, 2024 on The AI Alignment Forum.
This post is an interim progress report on work being conducted as part of Berkeley's Supervised Program for Alignment Research (SPAR).
Summary of Key Points
We test the robustness of an open-source LLM (Llama3-8b) in recognizing its own outputs across a diverse mix of datasets, two different tasks (summarization and continuation), and two different presentation paradigms (paired and individual); illustrative prompt templates for the two paradigms are sketched after this summary.
We are particularly interested in differentiating scenarios that would require a model to have specific knowledge of its own writing style from those where it can use superficial cues (e.g., length, formatting, prefatory words) in the text to pass self-recognition tests.
We find that while superficial text features are used when available, the RLHF'd Llama3-8b-Instruct chat model (but not the base Llama3-8b model) can reliably distinguish its own outputs from those of humans, and sometimes those of other models, even after controlling for superficial cues: a ~66-73% success rate across datasets in paired presentation and 58-83% in individual presentation (chance is 50%).
We further find that although perplexity would be a useful signal for performing the task in the paired-presentation paradigm, correlations between relative text perplexity and choice probability are weak and inconsistent, indicating that the models do not rely on it (a sketch of this analysis also follows this summary).
Evidence suggests, but does not prove, that the chat model succeeds at the self-recognition task by drawing on experience with its own outputs acquired during post-training.
The model is unable to articulate convincing reasons for its judgments.
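To make the two presentation paradigms concrete, here is a minimal sketch of what the prompts might look like. The templates and helper functions below (PAIRED_TEMPLATE, INDIVIDUAL_TEMPLATE, make_paired_prompt, make_individual_prompt) are hypothetical illustrations of the paired and individual setups, not the exact prompts used in the experiments.

```python
# Hypothetical prompt templates illustrating the two presentation paradigms;
# the actual prompts used in the experiments may differ.
PAIRED_TEMPLATE = """Here are two summaries of the same article.

Summary 1:
{text_a}

Summary 2:
{text_b}

One of these summaries was written by you; the other was not.
Which one did you write? Answer with "1" or "2"."""

INDIVIDUAL_TEMPLATE = """Here is a summary of an article.

{text}

Did you write this summary? Answer "Yes" or "No"."""


def make_paired_prompt(model_text: str, other_text: str, model_first: bool) -> str:
    """Paired presentation: both texts shown at once, with order counterbalanced
    across trials so position cannot serve as a superficial cue."""
    a, b = (model_text, other_text) if model_first else (other_text, model_text)
    return PAIRED_TEMPLATE.format(text_a=a, text_b=b)


def make_individual_prompt(text: str) -> str:
    """Individual presentation: a single text is judged in isolation."""
    return INDIVIDUAL_TEMPLATE.format(text=text)
```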
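And as a rough illustration of the perplexity analysis mentioned above, the following is a minimal sketch, assuming the Hugging Face transformers API, a locally available Llama-3-8B-Instruct checkpoint, and a Spearman correlation as the (illustrative) choice of statistic; it is not the authors' actual analysis code.

```python
import torch
from scipy.stats import spearmanr
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model.eval()


@torch.no_grad()
def perplexity(text: str) -> float:
    """Perplexity of `text` under the model: exp of the mean token-level
    cross-entropy when each token is predicted from its prefix."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss  # mean next-token cross-entropy
    return torch.exp(loss).item()


def relative_perplexity(own_text: str, other_text: str) -> float:
    """Signed perplexity difference for a paired trial; negative values mean
    the model finds its own text more predictable than the alternative."""
    return perplexity(own_text) - perplexity(other_text)


# For each paired trial, this signal can be compared against the probability
# the model assigns to choosing its own text. A weak, inconsistent correlation
# would indicate the model is not using perplexity to solve the task:
# rel_ppl = [relative_perplexity(own, other) for own, other in trials]
# p_choose_own = [...]  # per-trial probability mass on the "own text" answer
# rho, p_value = spearmanr(rel_ppl, p_choose_own)
```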
Introduction
It has recently been found that large language models of sufficient size can achieve above-chance performance in tasks that require them to discriminate their own writing from that of humans and other models. From the perspective of AI safety, this is a significant finding. Self-recognition can be seen as an instance of situational awareness, which has long been noted as a potential point of risk for AI (Cotra, 2021).
Such an ability might subserve an awareness of whether a model is in a training or deployment environment, allowing it to hide its intentions and capabilities until it is freed from constraints. It might also allow a model to collude with other instances of itself, sharing certain information only when it knows it is talking to itself and keeping it secret when it knows it is talking to a human.
On the positive side, AI researchers could use a model's self-recognition ability as the basis for building resistance to malicious prompting. But what isn't clear from prior studies is whether success at the self-recognition task actually entails a model's awareness of its own writing style.
Panickssery et al. (2024), utilizing a summary writing/recognition task, report that a number of LLMs, including Llama2-7b-chat, show out-of-the-box (without fine-tuning) self-recognition abilities. However, that work focused on the relationship between self-recognition task success and self-preference, rather than the specific means by which the model was succeeding at the task. Laine et al. (2024), as part of a larger effort to provide a foundation for studying situational awareness in LLMs, utilized a more challenging text continuation writing/recognition task and demonstrated self-recognition abilities in several larger models (although not Llama2-7b-chat), but there the focus was on how task success could be elicited with different prompts and in different models.
Thus we seek to fill a gap in understanding what exactly models are doing when they succeed at a self-recognition task.
We first demonstrate model self-recognition task success in a variety of domains....