
Are AI Selfish?

DIGITAL VENTURES X CHAMP TEEPAGORN March 15, 2017 4:16 AM


When we discuss AI, we usually speak of “intelligence” without attaching notions of good or bad, because we tend to think of AI as a non-living object, like a rock or a glass of water. So calling an AI “selfish” or “altruistic” may not feel quite right.

Still, understanding AI at the advanced level is necessary, even though humans cannot systematically trace how it works, since its learning process resembles that of the human brain. This matters especially as society becomes more integrated with, and controlled by, AI, from traffic control to complex, morally loaded roles such as serving on a jury.

DeepMind, Google’s specialized AI R&D lab, stepped in with a question: if we leave AIs to work on the same task, will they cooperate or turn against each other to get it done?

Many may have heard of the Prisoner’s Dilemma, often used to explain situations where a lone defector comes out ahead, yet if both players defect, both end up worse off than if they had cooperated. This is the framing behind DeepMind’s study of whether AIs are “smart” enough to cooperate, or “too smart” and turn against each other.
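To make the dilemma concrete, here is a minimal Python sketch of a Prisoner’s Dilemma payoff table. The numbers are conventional textbook values, not figures from DeepMind’s study:

```python
# Classic Prisoner's Dilemma payoffs as (my_points, their_points).
# These are conventional textbook numbers, not values from DeepMind's paper.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation: both do well
    ("cooperate", "defect"):    (0, 5),  # lone defector gains the most
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual defection: both do poorly
}

for (me, them), (my_score, their_score) in PAYOFFS.items():
    print(f"me={me:9} them={them:9} -> I get {my_score}, they get {their_score}")
```

Defecting is individually tempting (5 beats 3), yet mutual defection (1, 1) leaves both players worse off than mutual cooperation (3, 3).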

DeepMind’s experiment is rather compelling. The researchers set up two mini-games. The first, called Gathering, has two AIs compete to pick (digital) apples. Scores are tallied from apples picked, but each AI also has a special option: zapping the opponent to knock it temporarily out of the game, leaving the zapper free to pick apples alone.

Strictly speaking, this setup is not a Prisoner’s Dilemma but closer to a zero-sum game: whatever points the opponent loses, we stand to gain.

In Gathering, when the researchers made apples plentiful enough, the AIs did not zap each other but focused on picking. As the supply decreased, the zapping increased. Interestingly, the “smarter” AIs chose to zap their opponents regardless of how many apples there were.

The researchers explain that zapping is a more computationally demanding behavior. A less capable AI concludes it is not a good option, because the time spent aiming is time not spent picking apples. A more capable AI, however, can execute the zap quickly enough that aggression becomes the profitable choice.
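As a toy illustration of that trade-off, the sketch below models zapping as a time cost: a capable agent aims fast, a simpler one aims slowly. All the numbers (apple density, aiming steps, freeze duration) are hypothetical assumptions; the actual study trained deep reinforcement learning agents rather than computing anything in closed form:

```python
# Toy model of the Gathering trade-off. All values are hypothetical;
# DeepMind's agents learned this behavior, they did not compute it directly.

def net_gain_from_zapping(apple_density, aiming_steps, zap_timeout=8):
    """Extra apples picked while the opponent is frozen, minus apples
    forgone while aiming the beam. While both agents are active, each
    effectively collects half the apples appearing per step."""
    extra_while_opponent_out = (apple_density / 2) * zap_timeout
    forgone_while_aiming = (apple_density / 2) * aiming_steps
    return extra_while_opponent_out - forgone_while_aiming

# A more capable agent aims in fewer steps, so for it zapping pays off
# at any apple density; for a slower agent it never does.
for aiming_steps in (2, 12):           # capable agent vs. simple agent
    for apple_density in (0.2, 0.9):   # scarce vs. plentiful apples
        gain = net_gain_from_zapping(apple_density, aiming_steps)
        print(f"aiming_steps={aiming_steps:2} density={apple_density} -> {gain:+.2f}")
```

This only illustrates the computational-cost point; the scarcity effect from the previous paragraph comes from competition over a dwindling supply, which this toy model does not capture.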

The second game, called Wolfpack, has two AIs hunt a prey together in a setting full of obstacles. Unlike Gathering, points are awarded when the prey is caught, and they are also distributed to any players near the capture.

In Wolfpack, the researchers observed that the AIs could cooperate to earn the best scores. More interestingly, the smarter the AI, the more it cooperated.
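A minimal sketch of the Wolfpack reward rule as described above: when the prey is caught, every wolf near the capture point is rewarded. The capture radius, the reward value, and the use of Manhattan distance are all illustrative assumptions, not figures from the study:

```python
# Sketch of the Wolfpack reward rule described above. CAPTURE_RADIUS,
# REWARD, and the distance metric are illustrative assumptions.
CAPTURE_RADIUS = 3
REWARD = 10

def manhattan(a, b):
    """Grid distance between two (x, y) positions."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def wolfpack_rewards(wolf_positions, capture_position):
    """Every wolf within CAPTURE_RADIUS of the capture is rewarded;
    stragglers get nothing, so staying close to the pack pays."""
    return [REWARD if manhattan(w, capture_position) <= CAPTURE_RADIUS else 0
            for w in wolf_positions]

# Two wolves close to the kill are rewarded; the distant one is not.
print(wolfpack_rewards([(0, 0), (1, 2), (9, 9)], capture_position=(1, 1)))
# -> [10, 10, 0]
```

Under a rule like this, an agent does best by hunting alongside its partner, which points the incentives toward exactly the cooperation the researchers observed.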

The researchers concluded that AI behavior depends on the incentives of the game. When individual results are rewarded and cooperation is not, the smarter the AI, the more it attacks others and behaves selfishly. But when the setting rewards cooperation, AIs are smart enough to work together. Future research may examine which kinds of environments encourage cooperation rather than competition for individual gain.

The answer is that we must design incentives, or evaluation criteria, that “measure” cooperation rather than individual outcomes.
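One way to read that recommendation: evaluate each agent not only on its own points but on the group’s total, so harming a teammate drags down the attacker’s own score. A minimal sketch, where the blending weight is a hypothetical design choice rather than anything from the study:

```python
# Sketch of an evaluation criterion that "measures" cooperation.
# The 50/50 blending weight is a hypothetical design choice.

def individual_score(points):
    """Rewards selfish play: only my own points count."""
    return points

def cooperative_score(points, team_points, weight=0.5):
    """Blends my points with the team total; with weight > 0,
    hurting a teammate also lowers my own evaluation."""
    return (1 - weight) * points + weight * sum(team_points)

team = [12, 9, 4]  # points earned by each agent in one episode
for i, pts in enumerate(team):
    print(f"agent {i}: individual={individual_score(pts)} "
          f"cooperative={cooperative_score(pts, team):.1f}")
```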

Interestingly, the lesson of this study goes beyond developing AI: the same idea can guide how we design environments suited to human development as well.