Digital Ventures


The Power of Forecasting: AI as the “semi-fortune teller”



One of my favorite books from last year is “Superforecasting” by Philip E. Tetlock and Dan Gardner. (At the time of writing, the book has not yet been translated into Thai.)

Superforecasting is about the art and science of predicting the future. Many of you may think of fortune tellers and reading the stars, but Superforecasting is in fact a very scientific book. The authors describe the “Good Judgment Project”, which assesses risks, situations, and events that may occur in the future. The project is a competition to see who can make the most accurate forecasts based on currently available information.

The “accuracy” of a forecast can be scored using the Brier score, which ranges from 0 to 2. The closer to 0, the more accurate the forecast: a forecast that is 100% correct scores 0, while a forecast that is completely wrong scores 2.
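To make the scoring concrete, here is a minimal sketch of the Brier score in Tetlock's 0-to-2, multi-outcome formulation. The `brier_score` function and the example forecasts are my own illustration, not code from the Good Judgment Project.

```python
def brier_score(forecast, outcome):
    """Brier score over mutually exclusive outcomes (0-2 scale).

    forecast: dict mapping each possible outcome to the probability
              assigned to it (probabilities should sum to 1).
    outcome:  the outcome that actually occurred.
    """
    # Sum of squared differences between each assigned probability
    # and the actual result (1 for what happened, 0 otherwise).
    return sum((p - (1.0 if o == outcome else 0.0)) ** 2
               for o, p in forecast.items())

# A confident, correct forecast scores near 0:
print(brier_score({"yes": 0.9, "no": 0.1}, "yes"))  # 0.02
# The same forecast, when wrong, scores much higher:
print(brier_score({"yes": 0.9, "no": 0.1}, "no"))   # 1.62
# Being 100% sure of the wrong outcome gives the maximum, 2.0:
print(brier_score({"yes": 1.0, "no": 0.0}, "no"))   # 2.0
```

This is why a superforecaster average of 0.25 versus 0.37 is a meaningful gap: both groups are far from random, but the lower score reflects consistently better-calibrated probabilities.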

The project eventually identified “superforecasters” with an average Brier score of 0.25, while forecasters outside the Good Judgment Project generally averaged around 0.37. (Lower is better, so the project's participants were more accurate.)

Risk exists in our everyday lives. The more accurately we can predict risks, the better our chances of success.

After finishing “Superforecasting”, I started thinking about what role artificial intelligence (AI) could play in forecasting, and how important it might be.

As you may know, most risk scores today are calculated by risk-management and financial companies using mathematical formulas. AI, through deep learning, could push forecasting capability further. The question is whether the resulting risk assessments are more accurate, and if so, by how much.

Many times, when we evaluate AI capability, we compare it with human capability. Ideally, AI should perform at least as well as a human to deliver its full benefits. (That said, AI that falls short of human performance can still be useful as a “helper” rather than a “replacement”.) In this case, we should compare forecasts made by AI (through deep learning or any other technique) with those made by human beings.

At present, no such comparison exists for risk forecasting. But similar comparisons have been made in related fields, such as law, which requires analyzing information to support fair court judgments and is a good blend of human “sense” and pure numbers.

Computer scientists have tried developing AI that can predict decisions in human-rights cases. They trained the AI on information from more than 600 cases heard by the European Court of Human Rights (ECHR). Once training was done, the AI's judgments were 79% accurate, meaning they matched the court's actual rulings 79% of the time. (Matching the court does not mean the AI is “correct”; it means the AI reasons similarly to the human judges.)
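To clarify what that 79% figure measures, here is a toy sketch: “accuracy” here is simply the rate of agreement between the model's predicted outcomes and the court's actual rulings. The case labels below are invented for illustration.

```python
# Actual rulings for ten hypothetical cases, and a model's predictions.
court_rulings = ["violation", "no_violation", "violation", "violation",
                 "no_violation", "violation", "no_violation", "violation",
                 "violation", "no_violation"]
ai_predictions = ["violation", "no_violation", "violation", "no_violation",
                  "no_violation", "violation", "no_violation", "violation",
                  "violation", "violation"]

# Accuracy = fraction of cases where the prediction matches the ruling.
agreement = sum(a == c for a, c in zip(ai_predictions, court_rulings))
accuracy = agreement / len(court_rulings)
print(f"Agreement with court: {accuracy:.0%}")  # Agreement with court: 80%
```

Note what this metric can and cannot say: a high agreement rate shows the model mimics the court, but if the court itself were biased, the model would faithfully reproduce that bias.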

This kind of AI also lets us see the “patterns” in court rulings more easily. For example, we can identify factors that strongly influence judgments. Cases involving conditions of detention (e.g. detainees not receiving food or adequate legal support) are generally ruled human-rights violations, while cases about punishment (e.g. length of sentence) tend to be decided differently. It was also found that rulings are generally based more on “the facts” of a case than on legal arguments.

Although the AI is quite accurate, its developers are not convinced that the technology can replace judges. In the end, they say, AI can “assist” judges as another source of information, while the final decision rests with a human judge exercising good judgment.

Ajay Agrawal, an economist at the University of Toronto, thinks similarly. He argues that machine intelligence can take over the prediction task and help humans make better decisions, but it cannot replace human judgment.

Still, we cannot know whether this optimism will match reality. If AI can make certain narrow predictions more accurately than people, there is no reason for a company to keep hiring a “human analyst” to make them.

For decisions that depend heavily on human reasoning, such as those in court, we may still need people who can “feel” their way through a legal case. The same goes for high-stakes decisions we are unwilling to trust to a machine (even when there is no concrete reason for that distrust).

In other sectors, however, especially those built around numbers and routine buying-and-selling decisions, we cannot yet be sure whether machines will merely “support” human decision making or “replace” it.

That’s something for the future to unfold.

The future of forecasting cannot itself be forecast. Funny, isn’t it?