Unpacking AI Bias: Impact, Detection, Prevention, and Policy; With Guest: Dr. Cari Miller, MBA, FHCA
What is AI bias and how does it impact both organizations and individual members of society? How does one detect if they’ve been impacted by AI bias? What can be done to prevent or mitigate it? Can AI/ML systems be audited for bias and, if so, how?
The MLSecOps Podcast explores these questions and more with guest Cari Miller, Founder of the Center for Inclusive Change and member of the For Humanity Board of Directors.
This week’s episode delves into the often-controversial topics of Trusted and Ethical AI within the realm of MLSecOps, offering thoughtful discussion and perspective. It also highlights the importance of continuing the conversation around AI bias and of working toward more ethical and fair AI/ML systems.
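The episode is a discussion rather than a tutorial, but as a rough illustration of what one basic bias-audit check can look like, here is a minimal Python sketch computing the demographic parity difference, i.e., the gap in favorable-outcome rates between demographic groups. The function name and data below are hypothetical and for demonstration only; they are not drawn from the episode.

# Hypothetical, minimal bias-audit sketch: demographic parity difference.
def demographic_parity_difference(predictions, groups):
    """Return the gap between the highest and lowest positive-prediction
    rates observed across the given group labels."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical model decisions (1 = favorable outcome) and group membership.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # A: 0.75, B: 0.25 -> 0.5

A real audit involves far more than a single metric, which is part of why the episode frames the problem in terms of detection, prevention, and policy together.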
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard: Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform