
The Dream AI

DIGITAL VENTURES X CHAMP TEEPAGORN November 20, 2017 3:44 AM


In his book Life 3.0, Max Tegmark, a professor at MIT, raises issues about the future of mankind and AI. He pictures situations much like the movies (in which AI and robots rule the world and kill humans!), although he notes this is unlikely. Some call this the Utopia (when AI is smart enough to make humans happy, it will divide us into districts by beliefs and needs, such as religion, hedonism, naturalism and so on), while others see a Dystopia (when nations use AI to fight wars like never before). The book also describes realistic scenarios with no definite ending, but rather a continual process of learning together.

What will AI be to humans? This may be the question of the century. On one end, some suggest that AI is nothing but mathematical formulas and calculations; it is merely a tool, like a highly advanced calculator. On the other hand, some believe AI will develop consciousness at the human level or beyond, and in the end become the descendants of mankind.

This may be a far-fetched scenario, yet even looking to the near future, the question remains significant: what is AI to humans, and what should it be?

In early 2017, Space10, the innovation lab of furniture giant IKEA, created an online questionnaire to gauge how people feel about AI. Space10 says the questionnaire "creates discussions about AI" and "makes the future of AI development more democratic".

After gathering responses for six months, Space10 released a report covering over 12,000 respondents from 139 countries. The report has several interesting highlights.

  • A majority (73%) of respondents want AI to be human-like. 85% say they want AI to detect and respond to human feelings, while 69% want AI to reflect our values and the way we view the world. This "human" trait raises further questions: if AI or robots are human-like, with thoughts and feelings, would we feel "guilty" about owning them? And is "ownership" of something so human-like akin to a master-slave relationship?

  • Should AI be "female" or "male"? This question emerged after several "smart assistants" such as Siri, Alexa, and Cortana were given female voices. It also raises a discussion of gender inequality (for example, can only females be "helpers"?), while 45% of respondents want AI to have a neutral gender or none at all.

  • Should AI respond to human needs before being asked? 62% of respondents say "yes". When asked about the personality of AI, 45% say AI should obey and assist, while 28% say it should be protective and "motherly" (caring for humans as a mother cares for a child). The remaining 27% say AI should have its own thoughts and challenge humans.

  • Should AI prevent humans from wrongdoing? 74% of respondents say yes, while 26% oppose.

From these replies, we see that respondents want AI to have a rather contradictory personality. A majority want AI to have its own thoughts, to be human-like, and to reflect human values and the way we see the world (which means AI would hold contradictory "thoughts and beliefs", just as humans do). Yet although we want them to be human-like, we still need them to "obey and assist", to "respond to our needs before being asked", and to "prevent us from wrongdoing".

It makes me wonder: if we really had such an AI, what would it be like? When we are about to make a mistake, how could it protect us without disobeying us? How could it anticipate what we want when we only want it to assist? How could it obey us and be human-like at the same time? That would be one confusing, contradictory AI. We may need to ask ourselves: where is the balance between obedience and an AI's ability to decide what is good for humans?

Recently, Alexa's developers at Amazon told The Washington Post that a great challenge is finding the right answer for depressed individuals, since some users bluntly ask Alexa how to commit suicide. The team decided it is not appropriate for Alexa to reply with web search results on how to do so; instead, it should provide suicide prevention hotlines that can help avert the incident. This is an example of how AI can "prevent wrongdoing" by "disobeying" a human. It is a fairly straightforward example, and we will need to consider future problems that are far more complicated.
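
To make the idea concrete, here is a minimal, purely hypothetical sketch of how such a guardrail could sit in front of an assistant's normal answer path. The function names (handle_query, web_search_answer), the crisis patterns, and the wording of the response are illustrative assumptions on my part, not Amazon's actual Alexa implementation.

```python
import re

# Hypothetical patterns an assistant might treat as crisis signals.
# These patterns, and the pipeline below, are illustrative assumptions only.
CRISIS_PATTERNS = [
    r"\bhow (do i|to) (commit suicide|kill myself)\b",
    r"\bi want to (die|end my life)\b",
]

CRISIS_RESPONSE = (
    "You are not alone. Please consider talking to someone: "
    "a suicide prevention hotline in your country can help right now."
)


def web_search_answer(query: str) -> str:
    """Stand-in for the assistant's normal answer path (e.g. a web search)."""
    return f"Here is what I found on the web for: {query}"


def handle_query(query: str) -> str:
    """Route crisis queries to a safe response instead of a web search."""
    lowered = query.lower()
    if any(re.search(pattern, lowered) for pattern in CRISIS_PATTERNS):
        return CRISIS_RESPONSE
    return web_search_answer(query)


if __name__ == "__main__":
    print(handle_query("What's the weather tomorrow?"))   # normal path
    print(handle_query("How do I commit suicide?"))        # guardrail path
```

The point of the sketch is the design choice, not the code itself: the assistant "disobeys" the literal request only for a narrow, explicitly defined class of queries, and everything else passes through unchanged.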

How should smart assistants reply when we ask them where to buy illegal drugs or how to get revenge? Should they remind us to choose healthy food and stop smoking?

Should AI "consider the mutual benefit of mankind" more than the "benefit of a handful of humans"? Should they weigh these factors before replying?

Should they help before being asked or show us the right path when we are about to make mistakes?

What would you want AI to be?


Reference:

We asked the internet how they’d design AI, Space10 – https://medium.com/conversational-interfaces/we-asked-the-internet-how-theyd-design-ai-ea70a073a5f