Digital Ventures

Accountability and Transparency: Future AI’s qualifications

DIGITAL VENTURES February 07, 2019 10:52 AM

Digital Ventures has shared many stories about AI applications that show how this technology is developing and moving closer to our daily lives. That progress has raised concerns about AI “accountability,” which helps prevent the damage these systems might cause. What is “accountability” in AI? What happens if AI lacks it? And why does AI need to learn accountability? Today, Digital Ventures discusses these questions.

Why is accountability significant?

AI is a high-potential technology. It can operate autonomously, analyzing and synthesizing data at speed, and it powers applications that directly affect human lives and property, such as autonomous vehicles. As a result, numerous ideas for monitoring AI have emerged. One regulation already in force is the EU’s General Data Protection Regulation (GDPR). Another trending idea is teaching algorithms to use and synthesize data with accountability, a concept grounded in the moral and ethical standards that humans currently practice.

Problems from the irresponsible use of AI

Irresponsible use of AI may be intentional, caused by humans, or may result from the lack of ethical rules to govern AI’s decisions. Several tangible problems have already arisen from real AI applications.

  • Widespread fake news. Information now flows faster than ever thanks to network technology, and fake news can be embedded in that stream. Because AI filters and recommends news to both local and global audiences, fake news spreads further and causes misunderstandings that lead to real damage. Moreover, today’s AI can also fabricate data, a technique known as the deepfake. Deepfakes distort information in image, voice, and video form, and it is alarming that their output can be nearly flawless and hard to distinguish from the real thing. One way to curb their spread is to use AI itself to detect the alterations and filter fake news out of the system.
  • Collecting biased data. We often use AI to manage big data and spot trends so we can make better decisions. What happens if the AI is fed biased data, producing biased trend analysis and other unethical mistakes? A clear real-world example is the analysis of human features. Deloitte reported research on AI errors in identifying human images: error rates were dramatically higher for some groups, owing to factors of race and gender. The AI misidentified women of color at a rate of 34%, while the error rate for white men was only 0.8%. Beyond being a sensitive issue, such errors also delay the adoption of AI in life-and-death domains such as medicine, because if AI cannot identify humans precisely, applications such as medical treatment cannot be reliable.
  • Possibility of errors in autonomous vehicles. If AI lacks accountability, another problem that will surely have consequences is vehicle control. As with data bias, if the AI cannot recognize humans, other living things, or property, it becomes a threat to everything around it. There are also disputed ethical questions: if a person jumps in front of an autonomous car, should the car swerve to save that person at the cost of killing the passenger, or hit the person to save the passenger? This, too, relates to the accountability of AI.
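The bias problem described above can be made concrete with a simple audit: compare a model’s error rate across demographic groups. This is a minimal sketch, not from the article; the labels, predictions, and group names are made-up example data in the spirit of the facial-analysis study cited.

```python
def error_rate_by_group(labels, predictions, groups):
    """Return the misclassification rate for each demographic group."""
    totals, errors = {}, {}
    for y, y_hat, g in zip(labels, predictions, groups):
        totals[g] = totals.get(g, 0) + 1
        if y != y_hat:
            errors[g] = errors.get(g, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Example: a model that errs far more often on one group than another.
labels      = [1, 1, 1, 1, 1, 1, 1, 1]
predictions = [0, 1, 0, 1, 1, 1, 1, 1]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(error_rate_by_group(labels, predictions, groups))
# {'A': 0.5, 'B': 0.0}
```

A large gap between groups, like the 34% versus 0.8% figure above, is exactly the kind of disparity such an audit would surface before deployment.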

What attributes should AI have before we use it?

Given the reasons and problems above, AI needs to abide by certain principles. Doing so will optimize AI usage, prevent damage, and keep any remaining damage at a minimal, controllable level. Large organizations developing digital ethics, such as the Institute of Electrical and Electronics Engineers (IEEE), UNI Global Union, and the European Economic and Social Committee, suggest two main attributes, as follows:

  • Accountability. AI operations such as recommendation, synthesis, and data display must be beneficial and must not lead to harm or damage in a broad context.
  • Transparency. When AI cannot accomplish its task with accountability, a human should be able to correct the error. AI operations should therefore be transparent: the system should reveal the source of its decision and all related data so that corrections can be made precisely.
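One way to support the transparency attribute is to log every AI decision together with its inputs and the evidence behind it, so a human auditor can trace and correct errors. This is an illustrative sketch, not a method from the article; the model name and field names are assumptions.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    model_version: str  # which model produced the decision
    inputs: dict        # the data the model saw
    decision: str       # what the model decided
    evidence: dict      # e.g. the features that drove the decision and their weights

# Hypothetical example: a credit-approval decision with its supporting evidence.
record = DecisionRecord(
    model_version="credit-scorer-1.4",
    inputs={"income": 52000, "late_payments": 0},
    decision="approve",
    evidence={"income": 0.7, "late_payments": 0.3},
)

# Serialized, the record gives an auditor the source of the decision
# and all related data, as the transparency principle requires.
print(json.dumps(asdict(record), indent=2))
```

Keeping such records is what makes precise correction possible: when a decision turns out to be wrong, the auditor can see which inputs and which evidence led to it.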

As seen above, “accountability” is crucial for a future in which AI use is unavoidable. Today we have discussed the concepts; soon we will look at the implementations. Follow us for more updates on this topic.

Thank you for the information from

deloitte.com, designforvalues.tudelft.nl, and informationaccountability.org.