
Sasha Luccioni: Connecting the Dots Between AI's Environmental and Social Impacts

1:03:07
 
Content provided by The Gradient. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Gradient or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://fa.player.fm/legal.

In episode 120 of The Gradient Podcast, Daniel Bashir speaks to Sasha Luccioni.

Sasha is the AI and Climate Lead at Hugging Face, where she spearheads research, consulting, and capacity-building to elevate the sustainability of AI systems. A founding member of Climate Change AI (CCAI) and a board member of Women in Machine Learning (WiML), Sasha is passionate about catalyzing impactful change, organizing events, and serving as a mentor to under-represented minorities within the AI community.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach Daniel at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (00:43) Sasha’s background

* (01:52) How Sasha became interested in sociotechnical work

* (03:08) Larger models and theory of change for AI/climate work

* (07:18) Quantifying emissions for ML systems

* (09:40) Aggregate inference vs. training costs

* (10:22) Hardware and data center locations

* (15:10) More efficient hardware vs. bigger models — Jevons paradox

* (17:55) Uninformative experiments, takeaways for individual scientists, knowledge sharing, failure reports

* (27:10) Power Hungry Processing: systematic comparisons of ongoing inference costs

* (28:22) General vs. task-specific models

* (31:20) Architectures and efficiency

* (33:45) Sequence-to-sequence architectures vs. decoder-only

* (36:35) Hardware efficiency/utilization

* (37:52) Estimating the carbon footprint of BLOOM and lifecycle assessment

* (40:50) Stable Bias

* (46:45) Understanding model biases and representations

* (52:07) Future work

* (53:45) Metaethical perspectives on benchmarking for AI ethics

* (54:30) “Moral benchmarks”

* (56:50) Reflecting on “ethicality” of systems

* (59:00) Transparency and ethics

* (1:00:05) Advice for picking research directions

* (1:02:58) Outro

Links:

* Sasha’s homepage and Twitter

* Papers read/discussed

* Climate Change / Carbon Emissions of AI Models

* Quantifying the Carbon Emissions of Machine Learning

* Power Hungry Processing: Watts Driving the Cost of AI Deployment?

* Tackling Climate Change with Machine Learning

* CodeCarbon (see the usage sketch after this list)

* Responsible AI

* Stable Bias: Analyzing Societal Representations in Diffusion Models

* Metaethical Perspectives on ‘Benchmarking’ AI Ethics

* Measuring Data

* Mind your Language (Model): Fact-Checking LLMs and their Role in NLP Research and Practice
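For listeners who want to try measuring the training-time emissions discussed in the episode, here is a minimal sketch using the open-source CodeCarbon package linked above. The `EmissionsTracker` API (`start()`/`stop()`) is CodeCarbon's documented interface, but the project name and the workload inside the tracked block are hypothetical placeholders for your own training loop.

```python
# Minimal sketch of tracking ML workload emissions with CodeCarbon
# (pip install codecarbon). The workload below is a stand-in for
# whatever training or inference job you actually want to measure.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="my-training-run")  # writes emissions.csv by default
tracker.start()
try:
    # --- hypothetical workload: replace with your real training loop ---
    total = sum(i * i for i in range(10_000_000))
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

CodeCarbon combines measured (or estimated) hardware power draw with the carbon intensity of the local electricity grid, which is why the hardware and data center location discussion at (10:22) matters so much for the final number.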


Get full access to The Gradient at thegradientpub.substack.com/subscribe
