Articles View

ID 267
Authors KU Chang-Yun
Title When AIs Say Yes and I Say No
Subtitle On the Tension between AI's Decision and Human's Decision from the Epistemological Perspectives
Keywords Artificial Intelligence, GDPR Article 22, Human in the Loop, Automated Decision-making
Issue 2019/4
DOI https://doi.org/10.22503/inftars.XIX.2019.4.5
Abstract Let us start with a thought experiment. A patient is waiting in the clinic for the diagnostic result that will decide whether he needs brain surgery for his medical condition. After the Software as a Medical Device (SaMD) has processed the case, the result classifies the patient into the high-risk group, with a 99.9% death rate, and indicates that he needs brain surgery immediately. But this result is the opposite of your own diagnosis that the patient does not need the surgery. Will you, as the physician in this scenario, object to the result the SaMD has produced? In theory, the human should be the one who makes all the decisions and should take the AI's results only as a reference, as GDPR Article 22 presumes. Quite the opposite, however: AI's results have a greater influence on humans than we thought. In this paper, I explore the tension between AI's decisions and human decisions from an epistemological perspective, i.e., I seek to justify the reasons behind positive human beliefs in AI. My conclusion is that positive human beliefs in AI arise because we have misidentified AI as a general technology; only if we can recognize their differences correctly can the requirement of a "Human in the loop" in GDPR Article 22 have its meaning and function.
Language English
Pages 61-76
Column Tanulmányok (Studies)