Content provided by Grant Larsen. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Grant Larsen or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://fa.player.fm/legal

FIR 136: Artificial Intelligence Ethics - Where Is the Line?

16:22
 

Manage episode 310655461 series 1410522

In this episode, we take a look at the question: where is the line for artificial intelligence ethics?

Alright, everybody, welcome to another episode of ClickAI Radio. So ethics, what are they? Well, if we look at Google's definition: moral principles that govern a person's behavior, or the conducting of an activity. Alright, so I'll give you an example, without going into my business law classes from college. I was in college and my brother came into town to visit and said, Hey, let's go skiing. Sorry, I told him, I've got to work. He says, Hey, come on, I'm in town. Let's go skiing. So what did I do? I lied. I told my boss I didn't feel well, and I went skiing. Well, to make matters worse, while I'm up skiing, I get hurt.

To make things even more embarrassing, as I come down off the hill being pulled by the ski patrol and they open up the sled, the local TV news station was there interviewing, and, well, I showed up on the evening news. My boss was watching the news and saw me there. What a way to learn a life lesson. What I did was unethical; I don't know that I have to go deeper than that. That was 35 years ago, and I sure hope I've learned my lesson since then. It was a painful one. He called me that night and said, Grant, I hope you're feeling better now. I said, Oh, yeah, I was in pain. Pain both physically as well as internally.

Alright, so since apparently some of us carbon-based beings struggle, or have struggled, with ethics, what does it mean to handle ethics for artificial intelligence? To frame that, I borrowed four categories from Wikipedia's coverage of AI ethics: category one, bias in AI; category two, robot rights; category three, threat to human dignity; and category four, weaponization of AI. Let's take a look at each of these briefly. Bias in AI is where, as we humans build these AI models, our own biases can either intentionally or unintentionally get incorporated, and that drives downstream decision-making. That's bias in AI; we'll look at it a little further in a moment. Robot rights is the idea that, just as humans have moral obligations to one another, we have them, or will have them, to our machines as well; it might be somewhat akin to animal rights. Number three was threat to human dignity. This covers the areas where respect, care, compassion, and other human attributes are needed; AI certainly should not be used to replace people there. So how do we help protect that? And the fourth area, weaponization of AI, is of course using AI in military combat scenarios or other scenarios like that.

Alright, what I'm not going to talk about here, though, because I think it deserves a separate episode, is the concept of the singularity: the notion that some self-improving AI becomes so powerful that humans can't stop it, kind of like the movie Eagle Eye, or I, Robot with Will Smith. So we're going to park that one off to the side. There is, however, a nonprofit organization called (big breath) the Partnership on AI to Benefit People and Society, formed by Amazon, Google, Facebook, IBM, Microsoft, Apple, all the biggies. They're obviously looking to develop best practices in AI ethics, so you might want to look them up and take a deeper view of them. Let's go a little deeper on each of these categories for just a moment. First, bias in AI. As you've probably heard, this example came out of towardsdatascience.com. They pointed out a case from 2019 where researchers found that an algorithm used on over 200 million people in US hospitals to predict the likelihood that someone needed extra medical care was breaking along the lines of race, black versus white. What they discovered is that the data itself, the way in which it was prepared, exposed that sort of bias in the way it was being interpreted, and it translated into discriminatory behavior.

So that was quite a painful lesson. The good news is that they looked into it and caught it; if they hadn't, it would have translated into obviously even worse long-term behavior. Here's another example. This comes from COMPAS, C-O-M-P-A-S, the Correctional Offender Management Profiling for Alternative Sanctions. Long title. It's an algorithm that was being used to predict the likelihood that a defendant would become a repeat offender in the correctional system, and it incorrectly predicted that a certain race of people would produce twice as many false positives for repeat offending as defendants who weren't of that race. Not good, right? That's what AI bias is about. So the questions are: what are the best practices to preserve the fair, balanced use of AI, and how do you vet your original assumptions to avoid these kinds of mistakes? I find that oftentimes, as organizations get focused on the question or the problem they want to solve, they get so dedicated to solving it that they don't stop to step back and challenge the original assumptions. Alright, so bias in AI: definitely a challenge, and very relevant to small and medium business owners today.
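To make that COMPAS-style disparity concrete, here is a minimal sketch of the kind of audit that surfaces it: comparing false-positive rates across groups. This is not from the episode; the data, group names, and predictions below are entirely made up for illustration.

```python
# Hypothetical fairness audit: compare false-positive rates (FPR) across
# groups. A real audit would use the model's actual predictions and
# ground-truth outcomes; this toy data just shows the mechanics.

def false_positive_rate(y_true, y_pred):
    """FPR = false positives / actual negatives."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return fp / negatives if negatives else 0.0

def fpr_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples."""
    groups = {}
    for group, t, p in records:
        ts, ps = groups.setdefault(group, ([], []))
        ts.append(t)
        ps.append(p)
    return {g: false_positive_rate(ts, ps) for g, (ts, ps) in groups.items()}

# Toy data: (group, actually_reoffended, model_predicted_reoffend)
records = [
    ("A", 0, 1), ("A", 0, 0), ("A", 0, 1), ("A", 1, 1),
    ("B", 0, 0), ("B", 0, 1), ("B", 0, 0), ("B", 1, 1),
]
rates = fpr_by_group(records)
print(rates)  # group A's FPR is double group B's in this toy data
```

Running a check like this per group, before a model ever reaches production, is one way to "vet your original assumptions" rather than discovering the disparity after the fact.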

Alright, let's take a look at robot rights. This apparently came up around 2017, after an EU Parliament report proposed a reasonable approach to dealing with it. From the research I saw across different reports, some of it from Dezeen.com (I may be saying that wrong) as well as a CNBC report, there's an attorney specializing in business law and IP who argued for extending workplace protections to robots, pointing out that people are kicking the robots and pushing them around. Here's what's driving that argument: by 2025, some predict that robots and machines driven by AI are going to perform half of all productive functions in the workplace. Holy smokes. It's 2021, almost 2022, and I've done enough AI by now to doubt that's really the case, half of all productive functions. Anyway, whether or not you believe that, what is not clear is whether the robots will have any worker rights.

And again, they pointed out that people who feel hostility toward them will kick over robot police resources or knock down delivery bots, things like that. I've certainly seen some of that. So the question is, to what degree do you see this raised in the minds of people, and where is the line for robot rights? What does that look like? I'm pretty sure I, Robot is not a documentary. Alright, let's move on to number three: threat to human dignity. This one is really very sensitive, obviously. It's the whole notion of human dignity: the fact that humans possess some intrinsic value that makes them worthy of respect regardless of age, ability, status, gender, or ethnicity. I subscribe to that; it makes total sense to me. A few years ago, I think back in 2018, Google announced an AI system with human-sounding voice interactions. In this particular case, I'm going to play an interaction for you. It's interesting; you can get this off of Google's site, so this is their material. I'm just going to take a clip of it and share it, and you tell me what you think. It's kind of interesting. I'm going to pause right here.

...

Okay, so I didn't play the whole thing, but it's pretty incredible how lifelike, how human-like, that voice sounded, and it was generated by the Google Duplex system. So the question is, did Duplex reduce the human dignity, in this case, of the assistant who was trying to schedule the salon appointment? Was her dignity reduced? And of course there are other variations of this, where AI can behave, or act, as though it's a human. Where's the dignity in that for the people involved, let alone in a healthcare situation? What would that look like? Those are difficult questions to answer. That brings us to the next category: weaponization of AI. Immediately, in the minds of some, there's the whole singularity notion again, that AI takes control, and people certainly fantasize about that part. The US Department of Defense calls weaponized AI "algorithmic warfare." There's an interesting article on thinkml.ai that discusses some of this. There's a category of this warfare called lethal autonomous weapons systems, LAWS, and the whole notion is autonomous weapons that can locate, identify, attack, and kill human targets.

And again, this sounds like the Tom Cruise movie Oblivion, right? In fact, I recently saw a segment on 60 Minutes discussing the US and Australia's relationship with China, and it pointed at a company that was producing weaponized drones as part of that puzzle. So the question is: where does weaponization of AI break ethical boundaries? By definition, if you use AI for weaponization, is it inherently unethical? Or, quite frankly, if AI is being used in the head-up display of an Air Force pilot to improve decision-making and save the pilot's life, is that unethical? Boy, there are more questions than there are answers. This area is fascinating to me because the world is changing so quickly here. There are, of course, many challenging questions to be addressed, and obviously I've only scratched the surface. But I wanted to let you know that in a future episode coming up soon, I've invited some guests from AI companies to further discuss this topic and bring in various viewpoints.

So in this particular episode I wanted to lay the groundwork for that. But to wrap up, I want to bring it back to the premise of this podcast channel: what does it mean to you as an SMB owner? You're trying to run your business and apply AI. Of these four categories I briefly touched, what does each mean to you today? Does it have immediate impact? My take is this. Bias in AI, I think, has the most immediate effect on you as a business owner today. To help address it, we have to ensure we're asking the right questions, not only the questions we want immediate answers to for our business; we also need the discipline to step back and evaluate the broader context in which we're pursuing them. It also means we need to prepare data sets that are representative and not skewed. Those are things that can be done now. The second category, robot rights, I don't think has a direct impact right now.
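One concrete way to act on that "representative, not skewed" advice is to compare the group distribution in a training set against the population you expect to serve. The sketch below is an illustrative assumption, not something from the episode: the group names, shares, and the 10-percentage-point tolerance are all hypothetical.

```python
# Hypothetical skew check: compare training-set group shares against
# expected population shares and flag any large gaps. Group names and
# the 0.10 tolerance are illustrative assumptions.
from collections import Counter

def group_shares(labels):
    """Fraction of the dataset belonging to each group."""
    total = len(labels)
    return {g: c / total for g, c in Counter(labels).items()}

def flag_skew(labels, expected_shares, tolerance=0.10):
    """Return {group: (expected, actual)} where the share deviates
    from expectation by more than `tolerance`."""
    shares = group_shares(labels)
    flagged = {}
    for group, expected in expected_shares.items():
        actual = shares.get(group, 0.0)
        if abs(actual - expected) > tolerance:
            flagged[group] = (expected, actual)
    return flagged

# Toy training set: 80% urban, 20% rural, against an assumed
# customer population of 55% urban, 45% rural.
training_labels = ["urban"] * 80 + ["rural"] * 20
expected = {"urban": 0.55, "rural": 0.45}
print(flag_skew(training_labels, expected))
# flags both groups: urban over-represented, rural under-represented
```

A check like this is cheap to run every time the training data is refreshed, which is exactly the kind of "step back and evaluate" discipline described above.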

But perhaps it will in the future. The third category, threat to human dignity, I think is a challenging one. We've got things like deepfakes, and tools like Duplex, and that was in 2018; here we are three and a half, four years since then, so you can imagine how far this has come. With deepfakes, Duplex, and other such technologies where deceit or potential deceit can grow, I think ascertaining identity is crucial to your business, and this will only grow over time. There have already been instances of AI-driven technologies that have faked out business executives. So I do believe this one is immediate. And then weaponization of AI: it's not a direct impact now, but you can see it's a growing one, something we obviously need to watch and address some more. All right, everybody, thanks for joining, and until next time, brush up on your AI ethics so you can put your business and your customers in the right lane.

Thank you for joining Grant on ClickAI Radio. Don't forget to subscribe and leave feedback. And remember to download your free ebook; visit ClickAIRadio.com now.


159 episodes
