Digital Ventures


Can AI learn “common sense”?

DIGITAL VENTURES X CHAMP TEEPAGORN March 31, 2017 2:42 PM


The Thai translation of the word “common” in “common sense” carries a similar sense to the English word: normal and ordinary. Both also connote the “common class”, ordinary people. In English, “common” additionally carries a classist, derogatory sense: “showing a lack of taste and refinement; vulgar”.

“Common sense” itself means “things commonly known” or “things everyone should normally know”. Yet “normal” is defined differently across cultures. For instance, wearing shoes in the house is normal in some cultures, while gestures that are normal elsewhere, such as the Thai “wai” or showing respect to elders, may not be normal in others.

Therefore, when we strive to build “common sense” into AI, things become quite complex (human common sense is not itself 100% unified), at least until we discover a framework that can truly define “common sense”.

If you have read about AI, you may have come across the “Turing test”, a simple experiment proposed by Alan Turing. In the test, a human “chats” (not face to face) through a program without knowing whether the other party is a human or a computer program. If the program can fool the human into believing it is human, it passes the test.

The problem with the Turing test is that there are several ways for a program to lure humans: replying with unrelated answers, giving neutral answers, drifting away from the question, repeating the question back, pretending to mistype, or taking a long time to “think”. Therefore, people have tried to develop other experiments to test whether, and to what extent, AI can “understand” or has “common sense” like humans do (at least humans who speak English).

The recent Winograd Schema Challenge involves questions that, answered with human common sense, would be simple. They are deliberately ambiguous questions designed to fool AI. For example: “The trophy would not fit in the brown suitcase because it was too big. What was too big?” (Grammatically it is unclear what “it” refers to; with common sense, the answer is the trophy.) Or: “The town councilors gave the demonstrators a permit because they threatened to use violence. Who threatened to use violence?” (It is unclear who “they” refers to; with common sense, the answer is the demonstrators, not the town councilors.)

These are questions AI cannot answer through simple “sentence analysis”; it needs to understand the relationships between things, and between subject and object in the sentence. That is, it needs enough “imagination” about the situation the sentence describes to reply correctly.

To date, several AIs have been tested on the Winograd Schema Challenge, yet the best results were only 48% and 45% correct, which is no better than guessing at random.
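To see why 48% is not meaningful, note that each Winograd question has exactly two candidate referents, so a coin flip already scores around 50%. The sketch below (a minimal illustration, not the official dataset format; the field names and the `random_baseline` function are my own) encodes the two example questions from this article and simulates a random-guessing baseline:

```python
import random

# Two sample Winograd schema questions from the article, each with the
# two candidate referents for the ambiguous pronoun. "answer" is the
# referent a human would pick using common sense.
SCHEMAS = [
    {
        "sentence": "The trophy would not fit in the brown suitcase "
                    "because it was too big.",
        "pronoun": "it",
        "candidates": ["the trophy", "the suitcase"],
        "answer": "the trophy",
    },
    {
        "sentence": "The town councilors gave the demonstrators a permit "
                    "because they threatened to use violence.",
        "pronoun": "they",
        "candidates": ["the town councilors", "the demonstrators"],
        "answer": "the demonstrators",
    },
]

def random_baseline(schemas, trials=10_000, seed=0):
    """Answer every question by picking a candidate at random,
    averaged over many trials. Expected accuracy: roughly 50%."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        for s in schemas:
            if rng.choice(s["candidates"]) == s["answer"]:
                correct += 1
    return correct / (trials * len(schemas))

print(f"random-guess accuracy: {random_baseline(SCHEMAS):.1%}")
```

Since chance performance is about 50%, a system scoring 45–48% has demonstrated no pronoun understanding at all.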

How can AI with “no heart and soul” better reply to the questions? Well, we need to teach them “common sense”.

Yann LeCun, director of Facebook AI Research, told the MIT Technology Review that one possible way to teach AI common sense is through “visual perception”. He notes that some researchers see AI learning relationships much as infants do: by observation rather than from written text.

“One of the things we really want to do is get machines to acquire the very large number of facts that represent the constraints of the real world just by observing it through video or other channels. That’s what would allow them to acquire common sense, in the end. These are things that animals and babies learn in the first few months of life—you learn a ridiculously large amount about the world just by observation. There are a lot of ways that machines are currently fooled easily because they have very narrow knowledge of the world.”

Now, if AI gains “common sense”, what’s next? Several experts predict that this will lead to the development of “general AI” that can solve problems without being task-specific.

Think of HAL in 2001: A Space Odyssey – that is how it will be.

Spooky yet thrilling.