

Ethics and AI

DIGITAL VENTURES X CHAMP TEEPAGORN December 27, 2018 11:47 AM


Do robots need to have ethics? Some may say “no” because they don’t even have a life or mind.

Let’s take a look at this story.

In 1967, Philippa Foot, often called the grande dame of philosophy, devised a thought experiment about a trolley while she was at Oxford University. She originally used it to analyze whether abortion was ethical.

 

 

Image credit: th.wikipedia.org

 

Initially, Foot posed several scenarios, such as whether the pilot of a crashing plane should steer it toward a big city or toward a village with fewer people, or whether a terrorist should spare the lives of five men or of one. The most familiar version, however, goes something like this.

You’re walking along a train track one morning when you spot something moving in the distance. At first you can’t tell what it is, but as it gets closer you see a locomotive approaching at great speed, faster than anyone would expect. Farther ahead, the track splits in two. The locomotive is heading for five people standing in its path; if you don’t act now, all five will surely die! There happens to be a lever next to you that would let you divert the train. The only problem is that if you pull the lever, the locomotive will switch to the other track, where one person is standing. That person will surely die if you pull the lever.

 

Here comes the billion-dollar question: would you choose not to pull the lever and let the five workers die, or pull the lever to save the five and “sacrifice” the one worker on the other track?

 

Normally, when asked the trolley question, most respondents (about 90% in virtual experiments) choose to “pull the lever,” saving the five and sacrificing the one, provided that the one person is a stranger and not a friend or a lover; if we know the person, we will not sacrifice them. Among philosophers, 68% would pull the lever, 8% would not, and 24% didn’t answer or would choose another option. (What would you do?)

This is a clash between two schools of ethical thought. One is utilitarianism, which tells us to do whatever generates the greatest benefit (that benefit can be measured in different ways; here it is “saving the most lives,” so utilitarians pull the lever). The other is deontology, which holds that certain actions, such as killing an innocent person, are always wrong (in this case, choosing to kill the one person who seemed to have nothing to do with the incident).

Not long after Philippa Foot presented this set of questions, Judith Thomson, a philosopher at the Massachusetts Institute of Technology, used Foot’s questions to develop two other situations. The first is the scenario above; the second is the “footbridge” case. In the second scenario, the locomotive is again heading toward the five workers, but this time you are standing on a bridge next to a fat man. If you push the man off the bridge onto the tracks, the locomotive will surely stop (because it will hit and kill him), and the five workers will be saved.

Thomson called this question the “Trolley Problem”.

 

Image credit: web.media.mit.edu

 

Today, programmers and philosophers are joining hands to build various trolley-problem models. For instance, Nicholas Evans, a philosophy professor at the University of Massachusetts Lowell, is collaborating with a team of philosophers and engineers to create algorithms based on a variety of ethical theories, in a project supported by the US National Science Foundation. The study will not try to decide which ethical theory is correct (for example, it will not declare that utilitarianism is right). Rather, it will simulate situations that AI developers and the public can use as a reference, so they know which ethical theory supports which type of decision.
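To make that idea a little more concrete, here is a minimal Python sketch of the reference approach: encode a dilemma, run it through simple rules standing in for different ethical theories, and see which theory supports which decision. The scenario fields (deaths_if_nothing, deaths_if_intervene) and the two rules are illustrative assumptions, not the project's actual algorithm.

```python
from typing import Callable, Dict

Scenario = Dict[str, int]

def utilitarian(scenario: Scenario) -> str:
    # Maximise lives saved: intervene whenever that leads to fewer deaths.
    if scenario["deaths_if_intervene"] < scenario["deaths_if_nothing"]:
        return "pull the lever"
    return "do nothing"

def deontological(scenario: Scenario) -> str:
    # Never actively redirect harm onto an innocent person.
    if scenario["deaths_if_intervene"] > 0:
        return "do nothing"
    return "pull the lever"

THEORIES: Dict[str, Callable[[Scenario], str]] = {
    "utilitarianism": utilitarian,
    "deontology": deontological,
}

# The classic lever case: five die if you do nothing, one dies if you intervene.
trolley = {"deaths_if_nothing": 5, "deaths_if_intervene": 1}

for name, decide in THEORIES.items():
    print(f"{name}: {decide(trolley)}")
# utilitarianism: pull the lever
# deontology: do nothing
```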

Scientists at MIT are also developing an online platform called the Moral Machine, which asks people to choose which of two groups a vehicle should hit. Both groups consist of a mix of people; for example, group 1 might include children, mothers, and soldiers, while group 2 includes adults, elders, and doctors. The researchers will use this data in future machine-learning studies.
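Below is a minimal sketch of the kind of data such a platform might collect and how it could be tallied; the field names and group labels are assumptions for illustration, not MIT's actual schema.

```python
from collections import Counter

# Each record lists the two groups shown and which one the respondent chose to hit.
responses = [
    {"group_1": ["child", "mother", "soldier"], "group_2": ["adult", "elder", "doctor"], "hit": "group_2"},
    {"group_1": ["child", "mother", "soldier"], "group_2": ["adult", "elder", "doctor"], "hit": "group_2"},
    {"group_1": ["child", "mother", "soldier"], "group_2": ["adult", "elder", "doctor"], "hit": "group_1"},
]

# Tally how often each kind of person appears in the group respondents chose to spare.
spared = Counter()
for record in responses:
    spared_group = "group_1" if record["hit"] == "group_2" else "group_2"
    spared.update(record[spared_group])

print(spared.most_common())
```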

In the real world, there are already cases linking autonomous vehicles to the trolley problem. For instance, the German government has announced guidelines requiring autonomous vehicles to always value human life above animal life. If both subjects are human, the car should choose to hit the person who would be least injured, without considering their age, sex, or color (however, details of how the “level of injury” is judged weren’t announced: if the choice were between hitting a child who would be left disabled and hitting an adult who would be killed, would it hit the child?). The US Congress is also expected to issue new measures soon.
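The German rule described above could, in principle, be written down as a simple priority function. The sketch below assumes a single numeric “expected injury” score is available for each subject, which is exactly the detail the guideline leaves open; it is an illustration, not the official specification.

```python
from typing import List, NamedTuple

class Subject(NamedTuple):
    is_human: bool
    expected_injury: float  # 0.0 (unharmed) to 1.0 (fatal); assumed to be estimable

def choose_target(options: List[Subject]) -> Subject:
    """Pick which subject in the path the vehicle hits, following the stated rule."""
    humans = [s for s in options if s.is_human]
    animals = [s for s in options if not s.is_human]
    if humans and animals:
        # Human life outranks animal life, so spare the humans.
        return min(animals, key=lambda s: s.expected_injury)
    if humans:
        # All options are human: hit whoever would be least injured,
        # without weighting by age, sex, or color.
        return min(humans, key=lambda s: s.expected_injury)
    return min(animals, key=lambda s: s.expected_injury)

# Example: a child likely to survive with injuries vs. an adult likely to die.
print(choose_target([Subject(True, 0.6), Subject(True, 1.0)]))
```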

The trolley problem, which once seemed far removed from everyday life, is being reexamined because of the birth of new technology.

This gives an “ethical structure” to a lifeless entity such as a robot.

Robots need to be ethical because they are becoming an ever more important factor in our lives… human lives.