A much-debated question in an era like today, when AI is flourishing and embedded in every part of human life, is which skills we should practice, and which professions will still exist, when AI can replace humans in so many tasks. There is certainly no single correct answer, but I was quite surprised by an idea from Prof. Art – Attapon Pamakho, a professor at the Department of English and Linguistics, Faculty of Humanities, Ramkhamhaeng University. On my podcast show he said that he foresees "philosopher" becoming a necessary career in the future. That doesn't seem like a likely combination, right? How can philosophy be part of our preparation for robots? Let's get to the bottom of this together.
AI is advancing rapidly because humans have developed it to take over many human tasks, one of which is helping us make decisions. It is therefore only a matter of time before AI thinks and decides on its own. When that day comes, humans must be ready with a framework that tells AI which set of values to decide by. Which morals should we give AI for its decision-making, and who is responsible for the outcomes of those decisions? A realistic example that makes this concrete is the autonomous vehicle, which was created to reduce road accidents caused by human error: physical impairment, tiredness, intoxication, dizziness, and plain bad decision-making. We hand the driving over to a computer, but there will inevitably be situations in which the computer has to make decisions for humans. For instance, if the car is about to hit a pedestrian, should it be programmed to save the driver's life and hit the pedestrian, or should it save the pedestrian and steer into something else, which means the driver may be injured or even killed? Moreover, in an unavoidable collision, can the technology decide whether it is more reasonable to hit an elderly person or a child in front of the car? The basic principle is that every life is equally valuable. From a utilitarian perspective, however, a computer might choose to protect the child, since its calculation shows the child has a longer expected lifespan and can therefore contribute to the world for a longer period of time.
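To see how crude that utilitarian calculation really is, it can be sketched as a toy scoring function. Everything here is hypothetical and invented purely for illustration: the function names, the assumed 80-year lifespan, and the "remaining years" heuristic. No real autonomous-vehicle system is known to work this way; the point is only to show what "the child has a longer expected lifespan" looks like when reduced to arithmetic.

```python
# Hypothetical sketch of a utilitarian "whom to protect" rule.
# All names and numbers are invented for illustration only.

LIFE_EXPECTANCY = 80  # assumed average lifespan, in years


def utilitarian_score(age: int) -> int:
    """Expected remaining years of life: the crude 'utility' a
    purely utilitarian calculation might assign to saving a person."""
    return max(LIFE_EXPECTANCY - age, 0)


def choose_whom_to_protect(ages: dict) -> str:
    """Return the label of the person with the highest score,
    i.e. whoever this utilitarian rule would choose to protect."""
    return max(ages, key=lambda person: utilitarian_score(ages[person]))


# The article's example: a child versus an elderly pedestrian.
print(choose_whom_to_protect({"child": 8, "elderly": 75}))  # prints "child"
```

Reducing the dilemma to one number is exactly why philosophers are needed: the code runs, but whether its answer is *right* is not a programming question.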
Frankly, this kind of dilemma is hard even for a human to decide well. Everyone has their own answer, and questions like this will fuel never-ending debates. This is where philosophers will be needed to design guidelines for future AI. I wanted to put the same question to a technology expert, and no one was better placed than Mr. Kent Walker, Senior Vice President for Global Affairs at Google, who was recently in Thailand and who drafted the seven AI development principles for Google, a world leader in AI development. Kent pointed out that when driving, humans make millions of ethical micro-decisions: how far to stay from the footpath, for example, or how much distance to keep from the car in front to avoid an accident. Because autonomous vehicles are equipped with numerous sensors, they can perform these tasks better than humans. Simply put, his answer is that autonomous vehicles are designed to prevent the road accidents humans cause, and they will perform so well that we won't have to debate whose life to save, the driver's or the pedestrian's, because the situation will never arise. Kent's view is typical of those with a positive attitude toward AI, which is no surprise, since we know Google supports AI development for the benefit of mankind. On the other hand, there are also those who don't see such a bright future. This group is more skeptical and views AI negatively; familiar faces include Elon Musk, Bill Gates, and Stephen Hawking, all of whom have warned the world to be careful with AI development because one day it may wipe out mankind.
The fear of the rise of AI can be explained by a phenomenon called the Uncanny Valley, which describes the unease we feel toward things that closely resemble humans. The feeling often arises when we see a living thing or an object that looks very much like a human but isn't, triggering a sensation that is familiar yet eerie. Examples are the fear we feel when we see a doll that looks like a human, or when we see our reflection in a mirror. I have experienced the Uncanny Valley myself. My first and most intense encounter happened around seven years ago, when I traveled to Japan to interview Dr. Hiroshi Ishiguro, the creator of a robot that looks exactly like himself. The moment I walked through the door where his two robots were sitting, I got goosebumps for no reason at all, probably because what sat in front of me looked completely human but was lifeless. I saw rigid eyes and drooping arms and legs on something that looked alive yet had no muscles or nerves. However, once I was able to accept that this was a robot and not a human, the fear disappeared.
There may therefore be two ways to spare ourselves this fear at the thought that robots will soon be living among us. One: design robots that look so exactly like humans that even up close we cannot tell them apart. Two: design robots that look nothing like humans at all; they can look like a seal, a chicken, a crow, a coral, anything but a human. Coming back to the question of which ethics AI should possess: as of today, there is no clear answer. But now is probably the time to start brainstorming a roadmap for AI development in terms of ways of thinking, ethics, vision, and decision-making. What exactly are we looking to upload into the program? We need to lay this foundation early, by setting the roadmap for AI development together.
Who knows, steering your child toward a philosophy major may guarantee that when they graduate they will be in demand in the labor market, or at least will not be unemployed for long.