DOI: https://dx.doi.org/10.22503/inftars.XXV.2025.4.2
Language: en
Title: On the Relationship between Transparency, Explainability and Trust in AI Systems
Subtitle: A Conceptual Analysis
Abstract: This paper challenges the idea that transparency and explainability build trust in AI systems. We survey conflicting empirical evidence on the topic and then clarify the main concepts involved in the argument. Based on this conceptual clarification, we argue that transparency and explainability do not convey a complete understanding of how an AI system works and are not relevant factors for building trust in it. Accordingly, when the objective is to create trust in AI systems, transparency and explainability are neither necessary nor sufficient; pursuing them for this reason alone is therefore not rational. We conclude that, while the results of Explainable Artificial Intelligence (XAI) may be useful for other reasons, it is both necessary and possible to build trust in AI systems through alternative approaches such as rigorous validation and sound institutional arrangements and practices.
Keywords: transparency, explainability, explainable AI, trust, artificial intelligence