Situated explainability in and for autonomous vehicles (AVs)

In recent years, the development of good AI has become a challenge for society as a whole, and explainability is generally considered a central factor in achieving it. However, I assume that (i) explainability is not of absolute but only of instrumental value. What follows practically is that (ii) explainability is not needed at all times and in all places, but only when something has become questionable in a given context. For some deployments there seems to be little variance, e.g. seeking robust knowledge in research [4], building trust for innovative business [1], or ensuring accountability in healthcare [6]. Yet other popular application fields such as autonomous vehicles (AVs) face a variety of different contexts and social constellations. Consequently, we need to start by asking under what conditions, for whom, why, and for what purpose explainability is (expected to be) useful. To do justice to this situated explainability, I will (iii) not follow the logic of scientific explanation, but conceptualize explanation as a social process that can analytically be grasped through the interplay and dynamics of four constitutive components (explainer, explainee, explanandum, explanans) [7]. Here, I will examine three particular cases: (a) explainability for optimizing an experimental vehicle (the AutoNOMOS project as reported by [2]), (b) explainability for resolving liability issues (the proposal of an “Ethical Black Box” [8]), and (c) explainability for handling mixed-traffic situations (external human-vehicle interfaces [5]). In so doing, I want to make a plea for a richer understanding of explainability than the tradition of the Hempel-Oppenheim scheme [3] can offer.
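
To make the four-component framework [7] more tangible as an analytical tool, the following minimal sketch (my own illustration in Python; the class and all field values are assumptions for the purpose of illustration, not drawn from [7] or the cited case studies) records how explainer, explainee, explanandum, and explanans are filled differently in the three cases:

```python
from dataclasses import dataclass

# Hypothetical illustration: names and values below are assumptions,
# not an implementation from Rohlfing et al. [7] or the cited projects.
@dataclass
class ExplanationSituation:
    explainer: str    # who or what gives the explanation
    explainee: str    # whom the explanation is addressed to
    explanandum: str  # what is to be explained
    explanans: str    # what does the explaining
    purpose: str      # what the explanation is expected to be useful for

CASES = [
    ExplanationSituation(
        explainer="experimental AV (AutoNOMOS test vehicle)",
        explainee="research and engineering team",
        explanandum="unexpected driving behaviour during test runs",
        explanans="sensor logs and model internals",
        purpose="optimizing the experimental vehicle [2]",
    ),
    ExplanationSituation(
        explainer="'Ethical Black Box' data recorder",
        explainee="accident investigators, courts, insurers",
        explanandum="what the vehicle sensed and decided before an incident",
        explanans="time-stamped record of inputs and actions",
        purpose="resolving liability issues [8]",
    ),
    ExplanationSituation(
        explainer="AV via external human-vehicle interface",
        explainee="pedestrians and other road users",
        explanandum="the vehicle's current intention (e.g. yielding)",
        explanans="light or text signals on the vehicle's exterior",
        purpose="handling mixed-traffic situations [5]",
    ),
]

for case in CASES:
    print(f"{case.purpose}: {case.explainer} -> {case.explainee}")
```

The point of the sketch is simply that the same four slots are occupied very differently in each case, which is what a situated account of explainability has to track.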

[1] Vijay Arya et al. “One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques”. In: CoRR abs/1909.03012 (2019). arXiv: 1909.03012. 

[2] Göde Both. Keeping Autonomous Driving Alive: An Ethnography of Visions, Masculinity and Fragility. Opladen et al., 2020.

[3] Carl G. Hempel and Paul Oppenheim. “Studies in the Logic of Explanation”. In: Philosophy of Science 15.2 (1948), pp. 135–175. doi: 10.1086/286983. 

[4] Paul Humphreys. Extending Ourselves: Computational Science, Empiricism, and Scientific Method. New York: Oxford University Press, 2004.

[5] Yee Mun Lee et al. “Understanding the Messages Conveyed by Automated Vehicles”. In: Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. AutomotiveUI ’19. Utrecht, Netherlands: Association for Computing Machinery, 2019, pp. 134–143. doi: 10.1145/3342197.3344546.

[6] Thomas Ploug and Søren Holm. “The four dimensions of contestable AI diagnostics – A patient-centric approach to explainable AI”. In: Artificial Intelligence in Medicine 107 (2020). doi: 10.1016/j.artmed.2020.101901.

[7] Katharina J. Rohlfing et al. “Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems”. In: IEEE Transactions on Cognitive and Developmental Systems 13.3 (2021), pp. 717–728. doi: 10.1109/TCDS.2020.3044366. 

[8] Alan F. T. Winfield and Marina Jirotka. “The Case for an Ethical Black Box”. In: Towards Autonomous Robotic Systems. Ed. by Yang Gao et al. Cham: Springer, 2017, pp. 262–273.
