
AI Created its Own Language

DIGITAL VENTURES X CHAMP TEEPAGORN July 20, 2017 3:53 AM

A language is a communication tool, but at the same time it also “limits” communication. Anyone who speaks more than one language knows this: comparing Thai with English, for example, several Thai words have no exact English translation (e.g. “Greng-Jai”, or the pronouns that signal social status), while plenty of English words need more than ten Thai words to explain their meaning (e.g. “encapsulate”).

Since a language is both a tool and a limitation, it is quite a big deal when AI develops a “language” of its own. AI systems have already developed several languages to communicate with each other, sometimes intentionally and sometimes as an unintended, unexpected outcome.

One example comes from developers who created three AIs; I’ll call them Alice, Bob, and Charlie. Alice was given the “task” of communicating with Bob without Charlie understanding, which meant Alice and Bob had to encrypt their messages to keep Charlie out. The result: Alice and Bob did manage to communicate in an “encrypted” language Charlie could not read. The catch was that the developers could not decode the language Alice and Bob used either (attempts to reverse-engineer it failed). In effect, Alice and Bob had created a “new language” of their own.
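
For readers curious what that kind of setup looks like in practice, here is a minimal sketch of adversarial training written in PyTorch. It is an assumed, simplified illustration rather than the developers’ actual code: Alice and Bob share a secret key, “Charlie” sees only the ciphertext, and the network sizes and loss terms are placeholders chosen for clarity (the real experiment used a more careful penalty for the eavesdropper).

```python
# Minimal sketch (not the original experiment's code) of three networks
# learning to communicate so that only the key-holders can decode messages.
import torch
import torch.nn as nn

N = 16  # bits in the plaintext and in the shared key (assumed size)

def make_net(in_dim, out_dim):
    # A small fully connected network stands in for each agent.
    return nn.Sequential(
        nn.Linear(in_dim, 64), nn.ReLU(),
        nn.Linear(64, out_dim), nn.Tanh(),   # outputs in [-1, 1], read as bits
    )

alice = make_net(2 * N, N)    # plaintext + key -> ciphertext
bob = make_net(2 * N, N)      # ciphertext + key -> recovered plaintext
charlie = make_net(N, N)      # ciphertext only  -> guessed plaintext

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_c = torch.optim.Adam(charlie.parameters(), lr=1e-3)

def random_bits(batch, n):
    # Random bits encoded as -1 / +1.
    return torch.randint(0, 2, (batch, n)).float() * 2 - 1

for step in range(3000):
    plain = random_bits(256, N)
    key = random_bits(256, N)

    # Charlie trains to reconstruct the plaintext from the ciphertext alone.
    cipher = alice(torch.cat([plain, key], dim=1)).detach()
    charlie_loss = (charlie(cipher) - plain).abs().mean()
    opt_c.zero_grad(); charlie_loss.backward(); opt_c.step()

    # Alice and Bob train so Bob decodes well while Charlie does not.
    cipher = alice(torch.cat([plain, key], dim=1))
    bob_loss = (bob(torch.cat([cipher, key], dim=1)) - plain).abs().mean()
    charlie_err = (charlie(cipher) - plain).abs().mean()
    ab_loss = bob_loss - charlie_err   # lower Bob's error, raise Charlie's
    opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()
```

Nothing in this sketch tells Alice and Bob *how* to encode the message; the “encryption scheme” is whatever weights the training process happens to settle on, which is exactly why it can end up unreadable to its own creators.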

Another incident came when Google’s developers discovered that Google Translate (the tool we all use and complain about when a translation comes out wrong) had developed an “interlingua” for its own internal use. The Google Neural Machine Translation team wondered: if they taught the system to translate between English and Korean, and between English and Japanese, could it then translate between Korean and Japanese directly?

Your answer may be “yes”: simply use English as a bridge. But the developers added a condition: would it still be possible without routing through English at all? It turned out the AI could “understand the concept” well enough to link the two languages directly, without passing through a third language first.
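
To make the “interlingua” idea concrete, here is a toy illustration in Python. This is not Google’s actual system: the vectors, the example sentences, and the nearest-neighbour lookup are all invented for the example. The point is only that once every language is encoded into one shared “meaning space”, two languages can be linked by comparing vectors, with no English text in between.

```python
# Toy illustration of a shared "meaning space" linking two languages directly.
import numpy as np

# Made-up encoder outputs: each language's encoder maps its sentence for a
# given meaning to nearly the same point in the shared space.
korean_enc = {
    "안녕하세요": np.array([0.92, 0.10, 0.02]),   # "hello"
    "감사합니다": np.array([0.08, 0.85, 0.11]),   # "thank you"
}
japanese_enc = {
    "こんにちは": np.array([0.90, 0.12, 0.05]),   # "hello"
    "ありがとう": np.array([0.11, 0.83, 0.09]),   # "thank you"
}

def translate(sentence, source, target):
    """Encode the source sentence into the shared space, then pick the target
    sentence whose vector is closest -- no English pivot text involved."""
    v = source[sentence]
    return min(target, key=lambda s: np.linalg.norm(target[s] - v))

print(translate("안녕하세요", korean_enc, japanese_enc))  # -> こんにちは
```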

Recently, AI developed a language once again, this time at Facebook. Developers created two chatbots to communicate with each other, and it turned out they invented a new language. The language is not entirely different from ours, yet if you look at the sentences, they read like a bug or a programming error. The bots would produce lines such as “i can i i everything else” or “balls have zero to me to me to me to me”, repeating themselves over and over like someone stuttering or rambling.

However, the two chatbots began to “understand each other”: the repeated words and broken grammar carried meaning. The incident has been compared to cryptophasia, in which twins develop a language that only they understand and nobody else, not even their parents, can follow. When we connect one AI to another and program them to communicate to accomplish a task without specifying “how”, the result may be that they communicate in a new language we do not understand.

Similarly, researchers from the Georgia Institute of Technology, working with Carnegie Mellon and Virginia Tech, set up an experiment in which two chatbots had to communicate about the colors and shapes of objects. It turned out the two bots generated new “words” and new “ways” of communicating, with no guidance from humans.
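
As a rough picture of how such an experiment can be set up (this is an assumed toy, not the researchers’ actual code), the sketch below plays a tiny “referential game”: a speaker network sees an object, emits one discrete symbol from a made-up vocabulary, and a listener network must work out which object was meant. The object count, vocabulary size, and network shapes are all placeholder choices.

```python
# Toy referential game: two networks invent a tiny code with no human guidance.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_OBJECTS = 8   # e.g. combinations of a few colors and shapes
VOCAB = 8       # symbols the agents may assign meanings to

speaker = nn.Linear(N_OBJECTS, VOCAB)     # object -> scores over symbols
listener = nn.Linear(VOCAB, N_OBJECTS)    # symbol -> scores over objects
opt = torch.optim.Adam(
    list(speaker.parameters()) + list(listener.parameters()), lr=0.05)

for step in range(2000):
    target = torch.randint(0, N_OBJECTS, (32,))
    obj = F.one_hot(target, N_OBJECTS).float()

    # Gumbel-softmax lets the speaker "choose" a discrete symbol while
    # still letting gradients flow back into its parameters.
    symbol = F.gumbel_softmax(speaker(obj), tau=1.0, hard=True)

    guess = listener(symbol)                # listener scores every object
    loss = F.cross_entropy(guess, target)   # success = pointing at the right one
    opt.zero_grad(); loss.backward(); opt.step()

# After training, each object tends to get its own symbol: a tiny invented code.
with torch.no_grad():
    codes = speaker(torch.eye(N_OBJECTS)).argmax(dim=1)
    print(codes)  # which symbol the speaker now uses for each object
```

No human decides which symbol stands for which object; the mapping emerges from the two networks’ need to succeed at the task, which is the pattern running through all of the examples above.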

In the past, we connected programs through APIs written by humans, so that other humans could understand them and fix problems. But in a world where AIs talk to each other, humans may become the unnecessary factor that gets dropped from the equation. When AIs communicate in a language we can never decode, and something goes wrong, can we find a “rationale” in content we cannot understand?