Abstract
There is an ongoing debate about how algorithms and machine learning can and should deal with human diversity while avoiding the pitfalls of statistical stereotyping, the reinforcement of clichés, and the perpetuation of unjust discrimination. Computer scientists try to tackle these issues by developing algorithms and social-interaction protocols that mediate diversity-aware interactions between people, for instance on diversity-sensitive social platforms. At the same time, diversity-related data often comprise sensitive personal data, and their collection, storage, and management increase the vulnerability of users to various misuse scenarios. This observation alone raises the question of how responsibility must be conceptualized to do justice to this increased vulnerability. In this paper, I therefore focus on the questions that a diversity-sensitive social platform raises with regard to responsibility and propose a tentative ethical framework of responsibility for such platforms.