Navigating the Challenges of LLMs: Guardrails AI to the Rescue; With Guest: Shreya Rajpal
In “Navigating the Challenges of LLMs: Guardrails AI to the Rescue,” Protect AI co-founders Daryan Dehghanpisheh and Badar Ahmed interview Shreya Rajpal, creator of Guardrails AI.
Guardrails AI is an open source package that allows users to add structure, type, and quality guarantees to the outputs of large language models (LLMs).
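To make the idea concrete: the core pattern behind such guarantees is a deterministic validate-and-re-ask loop wrapped around a probabilistic LLM call. Below is a minimal sketch of that loop, assuming a generic JSON-output contract; `call_llm`, the required keys, and the retry prompt are hypothetical placeholders for illustration, not the Guardrails AI library's actual API.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any LLM completion call; swap in a real client."""
    raise NotImplementedError

# Deterministic contract the probabilistic model must satisfy (illustrative keys).
REQUIRED_KEYS = {"severity", "summary"}

def validate(raw: str) -> dict:
    """Hard checks layered on top of the model: a valid JSON object with expected keys."""
    data = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data

def guarded_call(prompt: str, max_retries: int = 2) -> dict:
    """Call the LLM, validate the output, and re-ask with the error on failure."""
    for _ in range(max_retries + 1):
        raw = call_llm(prompt)
        try:
            return validate(raw)
        except ValueError as err:  # json.JSONDecodeError subclasses ValueError
            # Feed the validation failure back so the model can self-correct.
            prompt = (f"{prompt}\n\nYour previous answer failed validation "
                      f"({err}). Respond with valid JSON only.")
    raise RuntimeError("LLM output never passed validation")
```

Guardrails generalizes this loop with declarative schemas and reusable validators; the trade-offs of layering that deterministic machinery on a probabilistic model are a recurring theme in the conversation.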
In this highly technical discussion, the group digs into Shreya’s inspiration for starting the Guardrails project, the difficulty of building a deterministic “guardrail” system on top of probabilistic large language models, and the broader challenges, both technical and otherwise, that developers face when building applications around LLMs.
If you’re an engineer or developer looking to integrate large language models into the applications you’re building, this episode is a must-listen that highlights important security considerations.
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
50 episodes
All episodes
Implementing Enterprise AI Governance: Balancing Ethics, Innovation & Risk for Business Success 38:39
Evaluating Real-World Adversarial ML Attack Risks and Effective Management: Robustness vs Non-ML Mitigations 41:19
Cybersecurity of Tomorrow: Exploring the Future of Security and Governance for AI Systems; With Guest: Martin Stanley, CISSP 39:45
A Holistic Approach to Understanding the AI Lifecycle and Securing ML Systems: Protecting AI Through People, Processes & Technology; With Guest: Rob van der Veer 29:25
ML Model Fairness: Measuring and Mitigating Algorithmic Disparities; With Guest: Nick Schmidt 35:33
Privacy Engineering: Safeguarding AI & ML Systems in a Data-Driven Era; With Guest: Katharine Jarmul 46:44
Indirect Prompt Injections and Threat Modeling of LLM Applications; With Guest: Kai Greshake 36:14
ML Security: AI Incident Response Plans and Enterprise Risk Culture; With Guest: Patrick Hall 38:49
MLSecOps: Red Teaming, Threat Modeling, and Attack Methods of AI Apps; With Guest: Johann Rehberger 40:29
MITRE ATLAS: Defining the ML System Attack Chain and Need for MLSecOps; With Guest: Christina Liaghati, PhD 39:48
Unpacking AI Bias: Impact, Detection, Prevention, and Policy; With Guest: Dr. Cari Miller, MBA, FHCA 39:22
A Closer Look at "Securing AIML Systems in the Age of Information Warfare"; With Guest: Disesdi Susanna Cox 30:50