Ep. 131: How can we make automated systems team players?
The discussion centers on two key design principles: observability, which ensures humans can understand what automated systems are doing and why, and directability, which allows humans to steer automation rather than simply turning it on or off. Using examples from aviation incidents like Boeing's MCAS system and emerging AI technologies, the episode demonstrates how these 25-year-old principles remain relevant for contemporary automation challenges in safety-critical systems.
Discussion Points:
- (00:00) Background on automation and natural experiments in safety
- (04:58) Hard vs soft skills debate and limitations of binary thinking
- (08:12) Two common approaches to automation problems and their flaws
- (12:20) The substitution myth and why simple replacement doesn't work
- (17:25) Design principles for coordination, observability, and directability
- (24:33) Observability challenges with AI and machine learning systems
- (26:25) Directability and the problem of binary control options
- (30:47) Design implications and avoiding simplistic solutions
- (33:27) Practical takeaways for human automation coordination
- Like and follow, send us your comments and suggestions for future show topics!
Quotes:
Drew Rae: "The moment you divide it up and you just try to analyze the human behavior or analyze the automation, you lose the understanding of where the safety is coming from and what's necessary for it to be safe."
David Provan: "We actually don't think about that automation in the context of the overall system and all of the interfaces and everything like that. So we, we look at AI as AI and, you know, deploying. Introducing ai, but we don't do any kind of comprehensive analysis of, you know, what's gonna be all of the flow on implications and interfaces and potentially unintended consequences or the system, not necessarily just the technology or automation itself."
Drew Rae: "It's not enough for an expert system to just like constantly tell you all of the underlying rules that it's applying, that that doesn't really give you the right level of visibility as understanding what it thinks the current state is."
David Provan: "But I think this paper makes a really good argument, which is actually our automated system should be far more flexible than that. So I might be able to adjust, you know, it's functioning. If I know, if I, if I know enough about how it's functioning and why it's functioning, and I realize that the automation can't understand context and situation, then I should be able to make adjustments."
Drew Rae: "There's, there's gotta be ways of allowing all the animation to keep working, but to be able to. Retain control, and that's a really difficult design problem."
Resources: