
The AI prophet

DIGITAL VENTURES X CHAMP TEEPAGORN July 26, 2018 11:00 AM

Humans have always wanted to know the future, but the uncertainty of things keeps this ability out of reach. We live in a three-dimensional world and cannot step outside it to explore the fourth dimension.

Nevertheless, we do have some ability to forecast "patterned situations" using the information around us. For example, try to guess what I will do next. Without additional information, you may shake your head and say, "How would I know?" But suppose I give you more details: I am walking to a ticket machine, I am taking out my wallet, I am pulling out a 100-baht banknote. Then I stop and ask, what do you think I will do next? You can easily guess that I am going to feed the banknote into the machine, buy a ticket, and walk through the gate to the subway.

"You're probably traveling somewhere," you reply, and it could end there. But if I add more information and let you watch a video clip of all the actions I just described, you can also observe the surroundings, which leads to a further conclusion. You might say, "Judging from what you are wearing, your bag, and the time of day, you are probably on your way to work."

This capability is not complicated for a human. From the initial and surrounding information, we use reasoning and experience to deduce the likely outcome; we call this common sense. The guess is not always correct (maybe, instead of putting the banknote into the ticket machine, I change my mind and use it to buy candy at the convenience store, which would be rather difficult to predict). Nevertheless, common sense works well enough for us to carry on our everyday lives.


Nonetheless, this kind of ability is not simple for computer software.

This is the task researchers at the University of Bonn in Germany set for themselves. In a study called "When Will You Do What? – Anticipating Temporal Occurrences of Activities", they "teach" an AI to predict a person's next action. The material the computer learns from is not complex: videos of people making breakfast and preparing salads.

In the breakfast videos, the researchers divided each recording into labeled segments such as "break the egg", "put into the pan", "stir the dough", and "cook the pancake", and the computer watched the set of videos to learn these actions. The salad videos were split into segments the same way (for example, "cut the cheese", "cut the cucumber"). An AI can already recognize the actions in such videos in real time with high accuracy; what the researchers want to teach it is to predict the action that comes next.
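To make the setup concrete, here is a minimal sketch of how such an annotated video could be represented as data: an ordered list of action segments with durations. The label names and numbers below are illustrative stand-ins, not the dataset's actual annotations.

```python
# A sketch of how one annotated cooking video could be represented:
# an ordered list of (action label, duration) segments.
# Labels and numbers are illustrative, not the dataset's actual annotations.

from dataclasses import dataclass

@dataclass
class Segment:
    action: str     # e.g. "break_egg"
    n_frames: int   # how long the action lasts, in video frames

# One breakfast video, flattened into its ordered action segments
pancake_video = [
    Segment("break_egg", 120),
    Segment("put_into_pan", 80),
    Segment("stir_dough", 200),
    Segment("cook_pancake", 400),
]

# The prediction task: given the segments observed so far (say, the first part
# of the video), output the sequence of future segments and their durations.
observed = pancake_video[:2]
to_predict = pancake_video[2:]
print([s.action for s in observed], "->", [s.action for s in to_predict])
```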


The researchers tried two approaches. In the first, the computer predicts the next action and verifies it before predicting again, step by step. In the second, the model outputs a matrix that encodes the probabilities of future actions in one pass. They found that, after training, the AI could predict the future (what the person in the video will do next) with about 40% accuracy over a short range (e.g., the next 40 seconds). Further into the future (e.g., the next 3 minutes), accuracy dropped to about 15%.
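As an illustration of the first, step-by-step approach, here is a minimal sketch of the recursive idea, assuming a hypothetical predict_next function that stands in for the trained model; it is not the paper's actual architecture.

```python
# A sketch of the recursive idea behind the first approach (not the paper's
# actual architecture): the model predicts the next action segment, feeds its
# own prediction back into the observed history, and predicts again until the
# requested horizon is covered. predict_next stands in for a trained model.

from typing import Callable, List, Tuple

Segment = Tuple[str, int]  # (action label, duration in frames)

def anticipate(observed: List[Segment],
               predict_next: Callable[[List[Segment]], Segment],
               horizon_frames: int) -> List[Segment]:
    """Roll the model forward until the prediction horizon is filled."""
    history = list(observed)
    predicted: List[Segment] = []
    covered = 0
    while covered < horizon_frames:
        action, duration = predict_next(history)  # one-step prediction
        predicted.append((action, duration))
        history.append((action, duration))        # feed the prediction back in
        covered += duration
    return predicted

# Toy stand-in model: always follows a fixed recipe order (illustrative only).
recipe = [("break_egg", 120), ("put_into_pan", 80),
          ("stir_dough", 200), ("cook_pancake", 400)]

def toy_model(history: List[Segment]) -> Segment:
    return recipe[min(len(history), len(recipe) - 1)]

# Given the first observed segment, predict roughly the next 600 frames.
print(anticipate([("break_egg", 120)], toy_model, horizon_frames=600))
```

The further ahead the loop predicts, the more it relies on its own earlier guesses, which is one intuitive reason accuracy falls off at longer horizons.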

That number may sound low: out of 100 predictions, only 15 are correct. Yet if we compare it with human common sense, the performance is quite similar. A human also finds it difficult to guess what a person will do in the next 3 minutes (while it is much easier to guess what they will do in the next 15-30 seconds). In any case, this is only the beginning of the development of such prediction systems, and it may serve as a foundation for future research.

If developed to a sufficient level of accuracy, this type of system could benefit human workers in the industrial sector (by anticipating what a worker will do next, it could have equipment or materials ready in advance). It could also be used in autonomous vehicles (guessing whether the car or motorcycle ahead is about to turn or brake), and even in monitoring public areas for abnormalities (knowing what people ordinarily do, it could record any unusual behavior for further review).

This advancement is both exciting and rather terrifying.

Reference: https://www.youtube.com/watch?v=xMNYRcVH_oI, https://arxiv.org