Machine concepts in hybrid decision making: between ethics and epistemology

In our presentation we focus on the phenomenon of hybrid (AI + human agent) decision making. We aim to problematize the integration of machine output into the reasoning process of the human agent, which supports decisions about action. We approach this problem from the viewpoint of two normative disciplines: ethics (the normativity of action, including the action of using machine output) and epistemology (the normativity of knowledge). The overarching goal of our inquiry is to understand the proper ways of integrating AI output into the decision process. Our premise is that, in order to answer this ethical question, we must first arrive at a realistic understanding of what type of epistemic product AI output is, and of its limitations. A significant part of this is accounting for the discrepancy between AI epistemology and human epistemology, and deducing the appropriate role for AI epistemic processes and products in the reasoning process of the human (as a basis for a normative science of hybrid reasoning). The interconnection of these two normative domains leads to a conceptualization of epistemo-ethical constraints on the hybrid decision-making process.

Based on our recent paper [the reference is omitted for the purposes of the blind review], we will problematize hybrid decision making in terms of reasons for action (which is especially important in the application of AI in high-risk domains). What matters for a justified, well-grounded decision is whether the output of an AI system can be taken as a justificatory reason for action (i.e. the consideration that justifies the choice). We argue that it is far from obvious that it should. Turning to more specific examples, we argue against a tempting and widespread tendency to interpret the output of neural network-based classification algorithms of the type "30% X, 70% Y" as "more likely Y". We show that the cognitive jump from the first to the second is not warranted: neither a probabilistic (frequency of events) interpretation of the machine output nor an interpretation of it as the identification of a new instance of a certain class (such as Y) is grounded.
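To make the kind of output at issue concrete, the following minimal Python sketch shows how a score vector of the form "30% X, 70% Y" typically arises from a normalization step such as softmax. The logit values are illustrative numbers chosen for the example, not taken from any system discussed in the paper; the sketch is only meant to show what the arithmetic does and does not deliver.

    import math

    def softmax(logits):
        """Normalize raw scores into non-negative values that sum to 1."""
        exps = [math.exp(z) for z in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical raw scores from a two-class network for a single input.
    logits = [1.0, 1.8473]      # classes X and Y (illustrative values)
    scores = softmax(logits)    # roughly [0.30, 0.70]
    print(scores)

    # The result is a normalized score vector by construction: it sums to 1
    # whatever the logits happen to be. Reading "0.70 for Y" as "Y occurs in
    # 70% of such cases" (a frequency claim), or as "this instance is a Y"
    # (a classification claim), adds an interpretive step that the arithmetic
    # above does not supply on its own.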

Taking the topic further, we pose the question: How are we to interpret the machine output in connection with human epistemic processes? We suggest that we need to introduce a new type of epistemic product: machine concepts, which differ from human concepts in the ways they are produced and structured, and in the way they relate to the outside world (cf. Sullivan 2019). Two distinctive features of these concepts are: (a) necessary fuzziness (even when the corresponding human concepts are hard, i.e. have crisp membership boundaries, cf. Zadeh 1965, 1975, 1999; Belis 2007) and (b) a predictive claim (made in virtue of the design and marketing of AI products). With respect to (a) we want to discuss: How are we to deal with the fuzziness of the AI output? How does it affect our epistemic relationship with the objective observations? How does it relate to the fuzziness of the entire reasoning process (given that machine output does not exhaust the entire hybrid decision-making process)? With respect to (b): What are we to make of the predictive claim of AI, given that its predictive power is questionable? What alternative concept better expresses the nature of AI outputs? Given that AI fuzzifies all elements of the reasoning process under uncertainty (while not all of them are fuzzy), how are we to deal with different degrees and types of uncertainty in hybrid reasoning? How does (or should) the fuzzy nature of machine concepts change the way we use them for arriving at decisions?
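To illustrate the contrast invoked in (a), the short Python sketch below juxtaposes classical (crisp) set membership with a Zadeh-style graded membership function. The attribute and all thresholds are illustrative assumptions we introduce for the example only; they are not drawn from the works cited above.

    # Crisp membership: an item either belongs to the class or it does not.
    def crisp_member(x, threshold=180.0):
        """Classical set membership: returns 0 or 1."""
        return 1.0 if x >= threshold else 0.0

    # Fuzzy membership in Zadeh's sense: degrees of belonging in [0, 1].
    def fuzzy_member(x, lower=160.0, upper=190.0):
        """Graded membership that rises linearly between two bounds."""
        if x <= lower:
            return 0.0
        if x >= upper:
            return 1.0
        return (x - lower) / (upper - lower)

    for value in (150.0, 175.0, 200.0):
        print(value, crisp_member(value), fuzzy_member(value))

Where the crisp function only ever returns 0 or 1, the graded function assigns intermediate degrees (here, 0.5 to the middle value), which is the sense in which a machine concept can remain fuzzy even when the corresponding human concept has crisp membership boundaries.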
