
Challenges in Evaluating AI Systems

22:33
 
Content provided by BlueDot Impact. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by BlueDot Impact or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://fa.player.fm/legal.

Most conversations around the societal impacts of artificial intelligence (AI) come down to discussing some quality of an AI system, such as its truthfulness, fairness, potential for misuse, and so on. We are able to talk about these characteristics because we can technically evaluate models for their performance in these areas. But what many people working inside and outside of AI don’t fully appreciate is how difficult it is to build robust and reliable model evaluations. Many of today’s existing evaluation suites are limited in their ability to serve as accurate indicators of model capabilities or safety.
At Anthropic, we spend a lot of time building evaluations to better understand our AI systems. We also use evaluations to improve our safety as an organization, as illustrated by our Responsible Scaling Policy. In doing so, we have grown to appreciate some of the ways in which developing and running evaluations can be challenging.

Here, we outline challenges that we have encountered while evaluating our own models to give readers a sense of what developing, implementing, and interpreting model evaluations looks like in practice.
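To make the difficulty concrete: even the "supposedly simple" multiple-choice format covered in the episode's first chapter hides pitfalls. Below is a minimal sketch of such an evaluation; `ask_model`, the sample question, and the answer parsing are illustrative assumptions, not Anthropic's actual harness.

```python
# Minimal sketch of a multiple-choice evaluation loop.
# `ask_model` is a hypothetical stub standing in for a real LLM API call.

def ask_model(prompt: str) -> str:
    """Stub: replace with a call to your model API of choice."""
    return "B."  # canned reply so the sketch runs end to end


QUESTIONS = [
    {
        "question": "Which gas makes up most of Earth's atmosphere?",
        "choices": {"A": "Oxygen", "B": "Nitrogen", "C": "Carbon dioxide"},
        "answer": "B",
    },
]


def run_eval(questions) -> float:
    correct = 0
    for q in questions:
        options = "\n".join(f"{k}. {v}" for k, v in q["choices"].items())
        prompt = f"{q['question']}\n{options}\nAnswer with a single letter."
        reply = ask_model(prompt).strip().upper()
        # The fragile step the episode highlights: models often reply
        # "B." or "The answer is B", so naive letter-matching can
        # silently miscount correct answers.
        if reply[:1] == q["answer"]:
            correct += 1
    return correct / len(questions)


print(f"accuracy: {run_eval(QUESTIONS):.0%}")  # accuracy: 100%
```

Even this toy harness has to commit to a prompt template and an answer-parsing rule, and both choices can shift measured accuracy — one instance of the kinds of challenges the episode walks through.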
Source:
https://www.anthropic.com/news/evaluating-ai-systems
Narrated for AI Safety Fundamentals by Perrin Walker

A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.


Chapters

1. Challenges in Evaluating AI Systems (00:00:00)

2. Introduction (00:00:15)

3. Challenges (00:02:23)

4. The supposedly simple multiple-choice evaluation (00:02:25)

5. One size doesn't fit all when it comes to third-party evaluation frameworks (00:06:42)

6. The subjectivity of human evaluations (00:10:45)

7. The ouroboros of model-generated evaluations (00:15:29)

8. Preserving the objectivity of third-party audits while leveraging internal expertise (00:16:56)

9. Policy recommendations (00:18:44)

10. Conclusion (00:21:50)

83 episodes

