Interviews with Anthropologists about their New Books. Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/anthropology
Content provided by Anthrocurious, LLC. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Anthrocurious, LLC or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://fa.player.fm/legal
Developing Responsible AI with David Gray Widder and Dawn Nafus
Manage episode 364982665 series 2427584
Contemporary AI systems are typically created by many different people, each working on separate parts or “modules.” This can make it difficult to determine who is responsible for considering the ethical implications of an AI system as a whole — a problem compounded by the fact that many AI engineers already don’t consider it their job to ensure the AI systems they work on are ethical.
In their latest paper, “Dislocated Accountabilities in the AI Supply Chain: Modularity and Developers’ Notions of Responsibility,” technology ethics researcher David Gray Widder and research scientist Dawn Nafus attempt to better understand the multifaceted challenges of responsible AI development and implementation, exploring how responsible AI labor is currently divided and how it could be improved.
In this episode, David and Dawn join This Anthro Life host Adam Gamwell to talk about the AI “supply chain,” modularity in software development as both ideology and technical practice, how we might reimagine responsible AI, and more.
Show Highlights:
- [03:51] How David and Dawn found themselves in the responsible AI space
- [09:04] Where and how responsible AI emerged
- [16:25] What the typical AI development process looks like and how developers see that process
- [18:28] The problem with “supply chain” thinking
- [23:37] Why modularity is epistemological
- [26:26] The significance of modularity in the typical AI development process
- [31:26] How computer scientists’ reactions to David and Dawn’s paper underscore modularity as a dominant ideology
- [37:57] What it is about AI that makes us rethink the typical development process
- [45:32] Whether the job of asking ethical questions gets “outsourced” to or siloed in the research department
- [49:12] Some of the problems with user research nowadays
- [56:05] David and Dawn’s takeaways from writing the paper
Links and Resources:
229 episodes