How Explainable AI is Critical to Building Responsible AI // Krishna Gade // MLOps Meetup #53
Manage episode 313294466 series 3241972
MLOps community meetup #53! Last Wednesday we talked to Krishna Gade, CEO & Co-Founder, Fiddler AI.
// Abstract:
Training and deploying ML models have become relatively fast and cheap, but with the rise of ML use cases, more companies and practitioners face the challenge of building “Responsible AI.” One of the barriers they encounter is increasing transparency across the entire AI lifecycle to not only better understand predictions, but also to find problem drivers. In this session with Krishna Gade, we will discuss how to build AI responsibly, share examples from real-world scenarios and AI leaders across industries, and show how Explainable AI is becoming critical to building Responsible AI.
// Bio:
Krishna is the co-founder and CEO of Fiddler, an Explainable AI Monitoring company that helps address problems of bias, fairness, and transparency in AI. Prior to founding Fiddler, Gade led the team that built Facebook’s explainability feature ‘Why am I seeing this?’. He’s an entrepreneur with a technical background, experienced in building scalable platforms and converting data into intelligence. Having held senior engineering leadership roles at Facebook, Pinterest, Twitter, and Microsoft, he’s seen the effects that bias has on AI and machine learning decision-making processes, and with Fiddler, his goal is to enable enterprises across the globe to solve this problem.
----------- Connect With Us ✌️-------------
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Krishna on LinkedIn: https://www.linkedin.com/in/krishnagade/
Timestamps:
[00:00] Thank you Fiddler AI!
[01:04] Introduction to Krishna Gade
[03:19] Krishna's Background
[08:33] Everything was fine when you were doing it behind the scenes. But then when you put it out into the wild, we just lost our "baby." It's no longer under our control.
[08:53] "You want to have the assurance of how the system works. Even if it's working fine or if it's not working fine."
[09:37] What else is Explainability? Can you break that down for us?
[13:58] "Explainability becomes the cornerstone technology to have in place for you to build Responsible AI in production."
[14:48] For those use cases that aren't as high stakes, do you feel it's important? Is it further up the food chain?
[18:47] Can we dig into that use case real fast?
[22:01] If it's a human doing it, is there a lot more room for error? Can biases or theories be introduced that have no basis in reality?
[23:51] Do you need subject matter experts, or someone very advanced, to set up what the Explainability tool should be looking for at first? Or is it plug and play, where it latches onto the model on its own?
[29:36] Does Explainable AI also entail Explainable Data? I see how Explainability can yield insights about the data after the model has been trained, but should it be handled more proactively, where you unbias the data before training the model on it?
[32:16] As a data scientist, there are situations when the prediction output is expected to support a business decision taken by senior executives. When the Explainable model gives a prediction that doesn't align with the stakeholders' expectations, how should one navigate this tricky situation?
[43:49] How is dendrogram clustering used for data explainability?
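For context on the tooling discussed above: the episode talks about explainability tools that attribute a model's predictions to its input features. The sketch below is not from the episode or from Fiddler's product — it is a minimal, self-contained illustration of one common technique, permutation feature importance, using a made-up linear "model" with illustrative weights.

```python
import random

# Made-up model weights, purely for illustration (not from the episode).
WEIGHTS = [0.7, 0.2, 0.1]

def predict(row):
    """Toy linear model standing in for any trained predictor."""
    return sum(w * x for w, x in zip(WEIGHTS, row))

def permutation_importance(rows, labels, feature_idx, trials=50, seed=0):
    """Average increase in mean-squared error when one feature is shuffled.

    A feature whose shuffling hurts accuracy a lot is one the model
    relies on heavily -- the basic idea behind this class of tools.
    """
    rng = random.Random(seed)

    def mse(rs):
        return sum((predict(r) - y) ** 2 for r, y in zip(rs, labels)) / len(rs)

    baseline = mse(rows)
    total = 0.0
    for _ in range(trials):
        col = [r[feature_idx] for r in rows]
        rng.shuffle(col)  # break the feature's link to the labels
        shuffled = [r[:feature_idx] + [v] + r[feature_idx + 1:]
                    for r, v in zip(rows, col)]
        total += mse(shuffled) - baseline
    return total / trials

# Synthetic data; labels come from the toy model itself.
rng = random.Random(42)
rows = [[rng.random() for _ in range(3)] for _ in range(200)]
labels = [predict(r) for r in rows]

scores = [permutation_importance(rows, labels, i) for i in range(3)]
```

Because the toy model weights feature 0 most heavily, its importance score comes out largest — the same ordering a real explainability tool would surface for a model that leans on one dominant feature.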