
#110 Explainable AI Methods for Structured Data

37:06
 

During this special episode, Felipe gives a presentation on explainable AI methods for structured data. First, Felipe talks about opening the black box. Algorithms can be both sexist and racist, even at massive companies like Google and Amazon. Removing bias from AI is a difficult problem, but there are ways to overcome it. Where does the bias come from? The dirty secret is that the data is biased. The algorithm doesn’t decide to be biased; it learns to be biased from the data. In reality, AI holds a mirror up to society: we have inherent sexism and racism in our society, and AI is a tool that can help us eradicate these underlying issues. No one should be attacking the people who made the algorithms.

The data is a representation of the world. We use explainable methods to interpret what is happening inside the algorithms. Algorithms fall into two groups: those that are inherently explainable and those that are not. When we come across an unexplainable algorithm, we can apply an explanation framework to it and try to make it more interpretable. Then, Felipe explains decision trees using the Titanic: start with a list of all the people who boarded the ship, then split them by gender, and keep applying clear rules until you can see which passengers survived. The resulting model gives you a good summary of all the data based on those rules (see the sketch below).
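As a companion to the Titanic example, here is a minimal sketch of such a decision tree. It assumes scikit-learn and seaborn's built-in Titanic dataset; the episode does not name a specific library or data file, so treat the details as illustrative.

```python
# Hedged sketch: a shallow decision tree on the Titanic data, mirroring the
# "separate them by gender, then apply clear rules" steps from the show notes.
# Assumes seaborn's bundled "titanic" dataset and scikit-learn (not specified in the episode).
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Load passenger records and keep a few easily interpretable features.
titanic = sns.load_dataset("titanic").dropna(subset=["age"])
X = titanic[["sex", "age", "pclass"]].copy()
X["sex"] = (X["sex"] == "female").astype(int)  # encode gender as 0/1
y = titanic["survived"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow tree keeps the rules readable; the first split it learns is on gender.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Print the learned rules as plain text, plus accuracy on held-out passengers.
print(export_text(tree, feature_names=list(X.columns)))
print("test accuracy:", round(tree.score(X_test, y_test), 3))
```

The printed rules are the whole point: each path from the root to a leaf is a human-readable statement about which passengers the model predicts survived.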

Felipe often comes across people who say a predictive algorithm needs to be 99% accurate or it is garbage. However, if you are predicting how a person will behave, the accuracy will be lower, because no one can perfectly predict how someone will act. Then, Felipe explains LIME: Local Interpretable Model-Agnostic Explanations. Regardless of the underlying model, you can use LIME to understand the prediction made for an individual person. Stay tuned as Felipe explains the random forest.
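To make the LIME idea concrete, here is a hedged sketch that explains one passenger's prediction from the tree built above. It assumes the open-source `lime` package (pip install lime) and reuses the `tree`, `X_train`, `X_test`, and `X` variables from the previous block; the episode does not prescribe this exact setup.

```python
# Hedged sketch: explain a single prediction with LIME (assumes the "lime" package).
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    training_data=X_train.to_numpy(),
    feature_names=list(X.columns),
    class_names=["died", "survived"],
    mode="classification",
)

# Explain the model's prediction for one passenger from the test set.
row = X_test.iloc[0].to_numpy()
explanation = explainer.explain_instance(row, tree.predict_proba, num_features=3)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Because LIME only needs a prediction function, the same call works unchanged if the decision tree is swapped for a random forest or any other model, which is what "model-agnostic" means here.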

Enjoy the show!

We speak about:

[02:10] About Felipe

[04:00] Opening the black box

[07:20] Where does the bias come from?

[11:20] Making more transparent algorithms

[17:00] About decision trees

[19:45] Using interpretable models

[22:20] About LIME: Local Interpretable Model-Agnostic Explanations

[30:10] How to use a random forest

Resources:

#70 Making Black Box Models Explainable With Christoph Molnar – Interpretable Machine Learning Researcher

Quotes:

“The data represents the way that the world works.”

“With the rise of AI, we can choose how we want the world to be.”

“Sometimes, we have algorithms that are just 52% accurate.”

Thank you to our sponsors:

Fyrebox - Make Your Own Quiz!

We are RUBIX. - one of Australia’s leading pure data consulting companies delivering project outcomes for some of the world’s leading brands.

And as always, we appreciate your Reviews, Follows, Likes, Shares and Ratings. Thank you so much for listening. Enjoy the show!
