
Content provided by Chris Romeo and Robert Hurlbut. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Chris Romeo and Robert Hurlbut or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://fa.player.fm/legal

Steve Wilson and Gavin Klondike -- OWASP Top Ten for LLM Release

51:43

Episode 381480976 (series 2540720)

Steve Wilson and Gavin Klondike are part of the core team for the OWASP Top 10 for Large Language Model Applications project. They join Robert and Chris to discuss the implementation and potential challenges of AI, and present the OWASP Top Ten for LLM version 1.0. Steve and Gavin provide insights into the issues of prompt injection, insecure output handling, training data poisoning, and others. Specifically, they emphasize the significance of understanding the risk of allowing excessive agency to LLMs and the role of secure plugin designs in mitigating vulnerabilities.
The conversation dives deep into the importance of secure supply chains in AI development, examining the risks of downloading anonymous models from community sharing platforms such as Hugging Face. The discussion also highlights the threat implications of hallucinations, where an AI produces the results it thinks it is expected to produce, tending to please people rather than generating factually accurate output.
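The supply-chain point above can be made concrete: before loading weights downloaded from a community hub, compare them against a digest published through a channel you trust. The episode doesn't prescribe an implementation; this is a minimal standard-library sketch, with `verify_artifact` and the pinned digest as hypothetical names chosen for illustration.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 65536) -> str:
    """Stream a file through SHA-256 so large model weights never sit fully in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Refuse to load weights whose digest does not match the pinned value."""
    return sha256_of(path) == expected_sha256.lower()
```

Pinning an exact digest (rather than a mutable tag or branch name) means a silently replaced model file fails the check instead of being loaded.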
Wilson and Klondike also discuss how certain standard programming principles, such as 'least privilege', can be applied to AI development. They encourage developers to conscientiously manage the extent of privileges they give to their models to avert discrepancies and miscommunications from excessive agency. They conclude the discussion with a forward-looking perspective on how the OWASP Top Ten for LLM Applications will develop in the future.
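The least-privilege idea discussed above can be sketched in a few lines: give each agent role an explicit allowlist of tools and deny everything else by default, so a prompt-injected request for an unapproved action fails closed. The role names and `dispatch` helper below are hypothetical, not from the episode or the OWASP document.

```python
# Hypothetical allowlist: each role is granted only the tools it needs.
ROLE_TOOLS = {
    "summarizer": {"read_document"},
    "assistant": {"read_document", "search_web"},
}

def dispatch(role: str, tool: str, call):
    """Run a tool only if the role's allowlist grants it; deny by default."""
    if tool not in ROLE_TOOLS.get(role, set()):
        raise PermissionError(f"role {role!r} may not use tool {tool!r}")
    return call()
```

Because the check is an allowlist rather than a blocklist, an LLM that is tricked into requesting a new or unexpected tool gets a `PermissionError` instead of excessive agency.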
Links:
OWASP Top Ten for LLM Applications project homepage:
https://owasp.org/www-project-top-10-for-large-language-model-applications/
OWASP Top Ten for LLM Applications summary PDF:
https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-slides-v1_1.pdf
FOLLOW OUR SOCIAL MEDIA:

➜Twitter: @AppSecPodcast
➜LinkedIn: The Application Security Podcast
➜YouTube: https://www.youtube.com/@ApplicationSecurityPodcast

Thanks for Listening!

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Chapters

1. Introduction (00:00:00)

2. A Threat Model for LLMs (00:01:37)

3. 1 - Prompt Injection (00:05:39)

4. 2 - Insecure Output Handling (00:09:00)

5. 3 - Training Data Poisoning (00:11:35)

6. 4 - Denial of Service (00:15:14)

7. 5 - Supply Chain Vulnerabilities (00:19:11)

8. 6 - Sensitive Information Disclosure (00:28:16)

9. 7 - Insecure Plugin Design (00:33:28)

10. 8 - Excessive Agency (00:38:27)

11. 9 - Overreliance (00:42:39)

12. 10 - Model Theft (00:46:31)

13. The Next Release (00:49:56)

295 episodes
