Many non-epistemic values for the design of responsible AI, such as transparency, diversity, non-discrimination, fairness, and societal well-being, depend on the quality of the output of algorithmic knowledge production as well as on a meaningful epistemic milieu between the technical system and the human user. Hence, epistemic values such as understanding, robustness, and maturity of the generated knowledge, as well as the epistemic sovereignty of the human user, become ethically relevant for the design of responsible explainable AI (rXAI). In my contribution, I am going to present a conceptual framework which brings together the epistemic status of the technical system and of the human user to derive recommendations for the design of rXAI.
To frame algorithmic knowledge production, I suggest applying the concept of the epistemic tool (Boon & Knuuttila 2009) to XAI technologies. This makes it possible to investigate the impact of XAI technologies on epistemic and ethical values from two perspectives: first, in the making of the tool, and second, in the application of the tool.
In the process of making the tool (design, implementation, testing), the AI technology becomes a knowledge repository. I will argue that the quality of the generation of this knowledge is essential for rXAI technology, together with an explicit communication of its presuppositions, weaknesses, and limitations. This applies to the quality of the input data, the underlying heuristics for the choice of the algorithm, strategies for the architecture of the tool, the planned knowledge transfer, hypothesis-building, as well as the overarching epistemic model.
In the process of application, XAI technology and human user enter into a joint process of knowledge production. Here, the abilities of the person dealing with the technical system and the affordances of the technical system (Norman 1988) come together and form a milieu of thinking and reasoning (author 2019). I will argue that the aim of this process is to achieve an epistemic ascent in the sense of a cognitive achievement (author 2017). A successful process brings together the affordances of the system, namely explainability and understandability, and the abilities of the human user (such as prior knowledge, education, competencies, and skills) into a reflective equilibrium (cf. Goodman 1954).
To achieve an explanation, the user needs to know whether the output (in the form of a knowledge claim) is plausible and robust. My claim is that knowledge about the presuppositions, weaknesses, and limitations of the AI tool is needed to formulate good reasons for a justification. This is achieved not only by aligning the design to specific cognitive constraints and affordances, but also by appropriate training of the user. To achieve understanding, the user needs to embed the output (in the form of a knowledge claim) in her or his background knowledge. This cognitive operation is successful if it leads to an epistemic ascent.
The proposed framework makes it possible to derive desirable epistemic and ethical values for the design of rXAI technologies. I will claim that for the design of rXAI technologies, explainability and understandability need to be implemented in such a way that the cognitive constraints and affordances of the system match the abilities of the user. The resulting reflective equilibrium needs to enable satisficing (Simon 1996) cognition and to secure respect for the epistemic sovereignty of the human user.