Understanding, Artificial Intelligence, and Moral Characters

In recent years, philosophers of science and technology ethicists have paid special attention to artificial intelligence (AI). One of their concerns is that AI systems that rely on machine learning techniques, such as neural networks, produce their outputs in such a way that even their makers do not know why specific patterns are extracted from a given dataset. These AI systems are called "non-transparent", "opaque", or "black boxes" (see Burrell 2016; Müller 2021a; for a distinction between three kinds of opacity, see Müller 2021b). 
There are different arguments against opaque AI systems. DARPA's explainable artificial intelligence (XAI) program implies that opacity conflicts with values such as trust and manageability (Gunning and Aha 2019). It has also been argued that unexplainable systems are dehumanizing: they threaten our participation in decision-making processes, our knowledge of how AI systems influence us, and our opportunity to actualize ourselves through making decisions (Colaner 2021). On the other hand, Zerilli et al. (2019) argue that those who demand transparency in algorithmic decision-making set a standard that is too high, one that we neither require of nor find helpful for human judgments. 
In this paper, I argue that understandable AI systems are both necessary and helpful. My argument is based on the view that technology is inherently normative: for a functional system to work in a stable and reproducible manner, it requires specific social and technical contexts. AI systems are a kind of technology, so they are inherently normative too. For them to be realized well, our socio-technical conditions should be changed appropriately (see Radder 2019, chapters 2 and 5-8). On this basis, I argue that opaque AI processes cohere with human characters that are indifferent to the reasons behind decisions and actions. As a result, even if AI systems do not lead to explicitly immoral consequences, they fit well into a society whose members do not ask moral and political questions. It can even be argued that opaque systems foster characters that "should" be indifferent to the realm of reason. 
The argument of this paper is compatible with, yet different from, the above arguments against opaque AI. I concentrate on the moral characters of (the members of) societies that employ AI systems and argue that a critical understanding of actions and decisions should be cultivated in the characters of humans living in the age of AI technologies. Non-transparent AI systems have an embedded structure that undermines what Shannon Vallor (2016, section 6.13) calls "technomoral wisdom"; understandable AI, by contrast, supports this wisdom. Given that practical wisdom is realized in communities of moral characters (Howard 2018, section 3), this paper also explains that the communities of AI researchers, engineers, entrepreneurs, and policymakers should act to realize transparent systems. Moreover, democracy – as the political context of transparent AI – should be institutionalized in the relevant corporations and governmental bodies (on the political side of AI, see Danaher 2016; Cave 2019; see also Solove 2001; O'Neil 2016; on the virtue ethics of technology, see also Ratti and Stapleford 2021). 

References 
Burrell, Jenna. (2016). How the Machine Thinks: Understanding Opacity in Machine Learning Systems. Big Data & Society, 3(1): 1–12. 
Cave, Stephen. (2019). To save us from a Kafkaesque future, we must democratise AI. The Guardian, 4 January 2019. 
Coeckelbergh, Mark. (2022). The Political Philosophy of AI: An Introduction. New Jersey: Wiley. 
Colaner, Nathan. (2021). Is explainable artificial intelligence intrinsically valuable? AI & Society: 1-8. 
Danaher, John. (2016). The Threat of Algocracy: Reality, Resistance and Accommodation. Philosophy & Technology, 29(3): 245-268. 
Gunning, David, and David Aha. (2019). DARPA's explainable artificial intelligence (XAI) program. AI Magazine, 40(2): 44-58. 
Howard, Don. (2018). Technomoral Civic Virtues: A Critical Appreciation of Shannon Vallor’s Technology and the Virtues. Philosophy & Technology, 31(2): 293-304. 
Müller, Vincent C. (2021a). Ethics of artificial intelligence and robotics. In The Stanford encyclopedia of philosophy, edited by Edward N. Zalta. https://plato.stanford.edu/archives/sum2021/entries/ethics-ai/. 
Müller, Vincent C. (2021b). Deep opacity undermines data protection and explainable artificial intelligence. In AISB 2021 Symposium Proceedings: Overcoming Opacity in Machine Learning, 18 – 21. 
O’Neil, Cathy. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Crown Publishers. 
Radder, Hans. (2019). From commodification to the common good: Reconstructing science, technology, and society. Pittsburgh: University of Pittsburgh Press. 
Ratti, Emanuele, and Thomas A. Stapleford (eds.). (2021). Science, Technology, and Virtues: Contemporary Perspectives. Oxford: Oxford University Press. 
Solove, Daniel J. (2001). Privacy and Power: Computer Databases and Metaphors for Information Privacy. Stanford Law Review, 53: 1393-1462. 
Vallor, Shannon. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford: Oxford University Press. 
Zerilli, John, Alistair Knott, James Maclaurin, and Colin Gavaghan. (2019). Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? Philosophy & Technology, 32(4): 661–683.