What role does logical thinking play in the fiction we have read about robots/artificial intelligence? How prepared do you think ordinary people are to use logic effectively to live/work with AI? What should we do about this?
Logical thinking plays a crucial role when it comes to making or commanding robots and artificial intelligence. In “Runaround,” Powell and Donovan had to think logically in order to save themselves and complete their task while balancing the competing demands of the Three Laws. This became necessary because Donovan forgot that robots do not think or understand nuance the way human beings do. In “Three Laws,” Dr. Hobbes is so puzzled by Iris’s murder of Mr. Won that she initially fails to remember that robots operate within a set of logical rules.
I think most people would be unable to use logic effectively with AI because most people are already illogical with the technology we have now. Over the past few years, technology has made great advances in medicine, engineering, and communications, but it has also made us more reliant on it and less tolerant of its errors. Because of this, if an AI failed to work because of a person’s inability to think logically or critically, that person would become frustrated and unable to use the AI productively. One solution would be to create an AI that could understand human nuance and assess danger or conflict the way a human would, but then the issue becomes the difficulty of building an AI capable of understanding others and thinking complexly. Even so, I think creating an AI with these capabilities would be easier than teaching an entire population to be more logical or to understand how AI and robots actually operate.
Hey, Andrea! I think you make an excellent point about our level of frustration with technology. In my own life, I’ve seen my parents yell at the GPS and their phones on multiple occasions, so I think you’re 100% right that our frustration with technology hinders our ability to use A.I. logically. When you mentioned the danger in creating an A.I. that adapts to human nuance, it made me realize that we might have to adjust our own behavior to function alongside A.I.; we would have to be more logical and think about how a robot or A.I. might perceive things. While I don’t think that’s necessarily a negative, do you think the implications of this are beneficial to the way we think or detrimental?
Hi Andrea! I actually had the opposite outlook on a potential future between robots and humans! I think you may be underestimating the baseline intelligence of the average human being. But then again, when I reminisce about all the stupid things I have done in the past, I rethink my argument. I also hadn’t taken into consideration other members of our community, such as those with learning disabilities or other circumstances that may affect their interactions with robots. On second thought, it probably would be easier for us to teach robots than to teach humans.