Computers monitoring comments: AI and the new role of comment moderation

DIGITAL VENTURES X CHAMP TEEPAGORN December 18, 2016 3:44 AM


It is impossible to use social networks without ever meeting their ‘dark side’. I believe that after just a few scrolls down a Facebook page, you will see a war zone on your screen, where people fire rounds of emotional bullets at one another without caring whether truth or facts still exist on this Earth.

Arguments that move online tend to become more aggressive and to make us more ‘radical’, through what is called the ‘Online Disinhibition Effect’: not seeing each other leads each person to treat the other as non-human, disconnected from any surroundings and existing only for the purpose of the argument.

Besides the emotional war zone, the online world has other ‘sick’ areas, as you might have seen. Some people think it is fun to post photos of corpses and killings on social networks. Others think it is fun to bully people or to spread false news.

The problem is: who takes care of all this?

Until now, the ones taking care of deleting or hiding inappropriate comments have been human beings. A feature in Wired magazine reported that Facebook’s ‘comment checkers’ suffer mild psychological symptoms from doing this job for too long: they become depressed because they are exposed to the worst sides of humanity for extended periods. It is not the kind of job anyone would choose if they had a choice. So why not build an Artificial Intelligence to do it instead, giving us wider coverage and faster checks without anyone having to suffer depression? Indeed, we have already seen progress on such comment-monitoring systems.

Google is one of the companies that believes this problem can be solved by Artificial Intelligence. The system is being developed by a technology laboratory called Jigsaw (formerly Google Ideas), a unit within Google. They believe they have almost found a ‘solution’ by building an AI that can ‘understand human language’ (to some extent, at least), called Conversation AI.

Early systems for monitoring inappropriate comments worked simply by ‘catching’ keywords, similar to Pantip’s system. For example, if I type ‘Google’, the website blocks the substring ‘Goo’ and displays only ‘*gle’. In some cases, depending on the settings, the comment is simply hidden instead.
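To see why this old approach is so limited, here is a minimal Python sketch of such a keyword filter (the blacklist is a hypothetical example). It matches raw strings, not meaning: it censors ‘Goo’ even inside a perfectly innocent word, while a trivially disguised version of the same word passes straight through.

```python
import re

# Hypothetical blacklist; a real forum would maintain its own list.
BANNED = ["goo"]

def mask(comment: str) -> str:
    """Replace every banned substring with '*', case-insensitively."""
    for word in BANNED:
        comment = re.sub(re.escape(word), "*", comment, flags=re.IGNORECASE)
    return comment

print(mask("Google"))     # -> "*gle", the Pantip-style behaviour described above
print(mask("G o o gle"))  # unchanged: inserting spaces defeats the filter
```

The filter has no idea what it is censoring; it only knows which strings it has been told to catch.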

Developing an Artificial Intelligence that better understands comments in human language will transcend these old limitations, and it can take over most of a comment moderator’s work. Some websites already use the Conversation AI system to check inappropriate comments, among them the New York Times. Statistics show it can read up to 18 million comments per day, steadily learning why particular comments should be ‘banned’: because they are irrelevant to the article, spam, sarcastic, rude, attacks on the author, attacks on other commenters, or direct attacks on the New York Times. An engineering director at the New York Times reported that the system reduces the workload of human moderators by 50 - 80%.
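Conversation AI is exposed publicly through Jigsaw’s Perspective API, which returns a probability that readers will perceive a comment as toxic, rather than a hard yes/no. A minimal sketch of how a moderation pipeline could call it is below; the API key, example comment, and 0.8 threshold are placeholders, and the request and response shapes follow the publicly documented commentanalyzer endpoint.

```python
import requests

# Placeholder key; real use requires a Google API key with Perspective enabled.
API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity(comment: str) -> float:
    """Return the model's probability that the comment is perceived as toxic."""
    body = {
        "comment": {"text": comment},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Only comments above some threshold reach a human moderator;
# the 0.8 cut-off is a hypothetical policy choice, not part of the API.
if toxicity("You are everything that is wrong with the internet.") > 0.8:
    print("queue for review / hide")
```

Note that the model returns a score, not a verdict: deciding where to draw the line, and who reviews the borderline cases, remains a human decision, which is exactly the limitation the rest of this article turns to.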

However, it would be difficult to transfer the comment-moderation system of a closed website (such as the New York Times) to general social networks, which have far more complex structures. And it is virtually impossible for an automatic system to replace 100% of human moderation in the near future, since that would require the system to understand human language along with every context of the human world. Certain insults, for instance, are indirect and not clearly aimed at their target; they only become insults when the insulter and the insulted share the same context.

For instance, during the last US presidential election, when candidate Donald Trump (now the President of the United States) declared a policy of banning immigrants, some tweets sent to a Jewish reporter read, “This is what you’ll look like if Trump wins”. If such a tweet were checked by an Artificial Intelligence, it would not register as an insult. And even if I add that the tweet came with a picture of a ‘bread oven’, many people might still not realise that this is an insult referring to the tragedy of the Jews, who were gassed to death during Hitler’s reign (a serious insult amounting to hate speech).

Another problem with screening comments by Artificial Intelligence is, as you might guess, that it could make the internet too ‘clean’.

‘Clean’ here means containing absolutely no rude words at all (which gives some of us goosebumps: imagine a movie scene in which everyone speaks politely while backstabbing one another), or containing no dissenting opinions. For example, if a company uses such an automatic screening system and someone complains about the company (out of dissatisfaction with a product or service), the system, which has no sympathy, could simply throw the comment in the bin, and there would be no culprit to hold responsible for disregarding freedom of expression (because it is not a human, but a computer!).

The adoption of this technology therefore comes with serious limitations. Of course, it can reduce the human burden. But it also reminds us that some work, especially work that touches on ‘humanity’, still has to be done by humans, and humans alone.