XAI: on explainability and the obligation to explain

Explainable AI (XAI) raises several important epistemological and ethical questions, for instance regarding its epistemic goal. Proposals range from explainability to interpretability (Erasmus et al. 2020) or understandability (Páez 2019). Different techniques for achieving these goals are also discussed, ranging from local approximations of complex models (Ribeiro et al. 2016) to visual (Xu et al. 2015) or textual explanations (Hendricks et al. 2016). The relation between these goals and techniques is itself debated extensively. Furthermore, some argue that the epistemic end serves only as a means to an ultimately ethical end: Employing XAI helps to avoid discrimination (Krishnan 2020) or allows individuals to contest decisions (Wachter et al. 2018).

What is commonly sidestepped, however, is the distinction between the ability to explain and the obligation to explain. Usually, it is assumed rather vaguely that an explanation should be provided for consequential decisions of AI systems (Lipton 2018), when their safe use is crucial (Doshi-Velez/Kim 2017), or in contexts in which the stakes are generally high, such as the medical field (Watson et al. 2019). However, this leaves important questions unanswered: In which specific cases and under what specific circumstances do we have an obligation to give an explanation? And, following from that: In which cases and under what circumstances is it necessary that an AI system provide an explanation? This omission is problematic on both ethical and technical grounds, because it remains unclear which characteristics of a situation establish an obligation to explain and how this obligation should be addressed technically.

In our paper we shed new light on the relation between the ability to explain and the obligation to explain in the context of XAI by combining approaches from moral philosophy, political philosophy, and epistemology. After an introduction (§1), we distinguish different types of explanation and outline why explanations can be ethically relevant (§2). Building on Kantian and Neo-Kantian theories, we then provide a framework that systematizes in which instances there is a (moral) obligation to explain (§3). The “right to justification” (Forst 2007) is one focal point, which we distinguish from a “right to explanation” as codified in the GDPR. We also look at instances in which individuals have a right to be informed without this amounting to a full-blown right to justification. Finally, we explain why there are also instances in which there is a (moral) obligation to explain even though nobody holds a “right” to either explanation or justification. Our systematization reveals that there are certain cases in which there is no obligation to provide an explanation, and that it is sometimes even (morally) forbidden to do so (§4). The systematization also helps to clarify what kind of explanation, if any, is needed. Thus, the paper not only closes an important gap by clarifying when we need explainability at all; it also paves the way for concrete suggestions to designers of XAI systems. We sketch these technical implications in our concluding remarks (§5).
