MITRE ATLAS: Defining the ML System Attack Chain and Need for MLSecOps; With Guest: Christina Liaghati, PhD
Episode 361106272 · Series 3461851
This week The MLSecOps Podcast talks with Dr. Christina Liaghati, AI Strategy Execution & Operations Manager of the AI & Autonomy Innovation Center at MITRE.
Chris King, Head of Product at Protect AI, guest-hosts with regular co-host D Dehghanpisheh this week. D and Chris discuss various AI and machine learning security topics with Dr. Liaghati, including the contrasts between the MITRE ATT&CK matrices focused on traditional cybersecurity, and the newer AI-focused MITRE ATLAS matrix.
The group also dives into new classifications of ML attacks related to large language models, ATLAS case studies, security practices such as ML red teaming, and integrating security into MLOps.
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
58 episodes
All episodes

Implementing Enterprise AI Governance: Balancing Ethics, Innovation & Risk for Business Success (38:39)
Evaluating Real-World Adversarial ML Attack Risks and Effective Management: Robustness vs Non-ML Mitigations (41:19)
Cybersecurity of Tomorrow: Exploring the Future of Security and Governance for AI Systems; With Guest: Martin Stanley, CISSP (39:45)
A Holistic Approach to Understanding the AI Lifecycle and Securing ML Systems: Protecting AI Through People, Processes & Technology; With Guest: Rob van der Veer (29:25)
ML Model Fairness: Measuring and Mitigating Algorithmic Disparities; With Guest: Nick Schmidt (35:33)
Privacy Engineering: Safeguarding AI & ML Systems in a Data-Driven Era; With Guest: Katharine Jarmul (46:44)