Abstract
Let us start with a thought experiment. A patient is waiting in the clinic room for a diagnosis that will determine whether he needs brain surgery for his medical condition. After processing, the Software as a Medical Device (SaMD) classifies the patient into the high-risk group with a 99.9% mortality rate and recommends immediate brain surgery. But this result is the opposite of your own diagnosis: the patient does not need the surgery. Will you, as the physician in this scenario, object to the result that the SaMD has produced? In theory, humans should be the ones who make all decisions and should take AI’s results as reference only, as GDPR Article 22 presumes. Quite the opposite, however: AI’s results have a greater influence on humans than we thought. In this paper, I explore the tension between AI decisions and human decisions from an epistemological perspective, i.e., I seek to justify the reasons behind positive human beliefs in AI. My conclusion is that positive human beliefs in AI arise because we have misidentified AI as a general technology, and only if we recognize their differences correctly can the requirement of a “human in the loop” in GDPR Article 22 have its meaning and function.