The conflict between explainable and accountable decision-making algorithms

When humans are decision-makers, they bear the responsibility for any decisions they make. The introduction of decision-making algorithms in law, finance, medicine, and many other environments thus raises the question of who is responsible for similar decisions. Attributing responsibility for algorithmic decision-making is further complicated by a property common to most algorithms—their opacity. Algorithms are often black boxes that do not offer explanations for their determinations. The lack of information about how these systems work makes the search for a responsible entity difficult, if not impossible. This limitation has fueled calls for Explainable Artificial Intelligence (XAI), a field of inquiry with the epistemic goal of creating algorithms that can “make [their] functioning clear” and the normative aim of maintaining meaningful human control, thereby allowing responsibility to be traced back to designers, users, and patients (i.e., those subjected to algorithmic decision-making).

In this paper, we challenge the notion that XAI can be a panacea for the responsibility issues posed by decision-making algorithms. We focus our discussion on AI systems designed to make consequential decisions that can provide explanations after making a determination, i.e., those explainable in a post-hoc manner. While we agree with authors who argue that explainable systems are crucial for the accountable deployment of algorithmic decision-making, we contend that post-hoc explanations provided by XAI might conflict with the public perception of AI systems’ agency and blameworthiness. Furthermore, we highlight how such post-hoc explanations may fail to fulfill their epistemic goals of providing understandable and actionable information. This failure is aggravated by the power relation between designers and patients, which allows the former to exploit XAI and shift responsibility to the latter.

Taking Scanlon’s interpretation of blame, which frames it as a response to the reasons upon which an agent has acted, we argue that post-hoc explainable algorithms could be perceived as agents explaining the reasons behind their decisions and thus as blameworthy. This perception obscures the role of human agents in algorithmic decision-making and shapes laypeople’s judgments of responsibility, impacting the policymaking, development, and adoption of algorithms. Motivated by the concern that developers could launder their agency through the deployment of autonomous systems and implement superficial ethical measures to escape regulation, we discuss how designers could exploit XAI to manipulate patients’ epistemic status by fostering a false sense of understanding and control, thereby creating moral and legal scapegoats.

The paper contributes a novel and critical perspective on XAI to the literature discussing the responsibility gaps posed by autonomous systems. We show why developers’ responsibilities should be highlighted during the development and deployment of decision-making algorithms. We discuss how existing regulatory proposals fail to address the conflict between explainability and accountability, and we show that non-binding accountability frameworks may be subject to abuse because of the power that designers hold over explainable algorithms, which allows them to shift perceived responsibility. We conclude with a defense of hard regulation to achieve a just balance between accountability and power.