The use of artificial intelligence (AI) algorithms has led to impressive results in various fields of medicine, e.g. in the detection or prediction of diseases [1] [2] [3]. But such algorithms are often opaque even to experts [4]. This raises an important epistemological question in the medical context: Does the opacity of AI algorithms undermine the ability of medical doctors to acquire knowledge on the basis of their outputs? Consider:
An AI algorithm, which has learned to predict the risk of breast cancer recurring within two years of surgery on the basis of next-generation sequencing data, is used to identify rare, at-risk patients. Medical doctor MD receives the algorithm’s prognosis and decides on its basis whether to prescribe adjuvant treatment. The algorithm’s performance in trials was satisfactory, and when MD himself used it, he couldn’t find anything amiss. This evidence makes it very likely that the algorithm is well-functioning and that its outputs are true. On a particular occasion, the algorithm predicts, and MD therefore believes, that patient P is not at risk. MD’s belief is true.
MD employs an opaque algorithm that is not immune to errors (such as biases, see [5]) and whose reliability he is unable to check, while a lot hinges on his making the right decision. We argue that these factors combine to undermine MD’s knowledge: he is lucky to form a true belief on the basis of the algorithm’s output, and knowledge is incompatible with luck [6].
We examine several philosophical approaches to knowledge that might explain how MD’s knowledge is undermined. After dismissing appeals to Gettier-style luck and to Sensitivity, we argue that Whiting’s Safety condition on knowledge [7] brings out the sense in which MD’s belief is lucky.
Safety: If S were to believe that p, then relative to S’s perspective, it would not be false that p.
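This condition can be given a rough possible-worlds formalization (the notation is ours, not Whiting’s): let $w$ be the actual world, let $N_S(w)$ be the set of worlds that count as close to $w$ relative to S’s perspective, and let $B_S p$ abbreviate “S believes that p”. Then

\[
\mathrm{Safe}_S(p) \iff \forall w' \in N_S(w)\, \big( B_S p \text{ holds at } w' \rightarrow p \text{ is true at } w' \big).
\]

On this reading, MD’s belief fails to be safe just in case some world that is close from MD’s perspective is one in which MD believes, on the basis of the algorithm’s output, that P is not at risk, while P is in fact at risk.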
We argue that, relative to MD’s perspective, the belief that P is not at risk could easily be false, despite MD’s evidence that the algorithm functions reliably. The algorithm’s opacity, the impossibility of checking whether its outputs are correct, and the well-known ways in which algorithms may fail, against the backdrop of the high stakes of the situation, may reasonably lead MD to doubt whether the algorithm is well-functioning and whether its output is true. In light of this doubt, the possibility that P is at risk despite the algorithm’s output is not far-fetched. So, from MD’s perspective, the counterfactual situation in which MD’s belief is false is quite similar to his actual situation. Given that Safety is necessary for knowledge, MD does not know that P is not at risk.
We end the paper by sketching some ways in which the described problem might be overcome. Other, independent sources of evidence may enable knowledge where an algorithm’s output alone cannot. Further, making algorithms more transparent, thereby overcoming some of their opacity, may resolve the problem [8] [9] [10].
References
[1] Baldwin, D. R., Gustafson, J., Pickup, L., Arteta, C., Novotny, P., Declerck, J., … Gleeson, F. V. (2020). External validation of a convolutional neural network artificial intelligence tool to predict malignancy in pulmonary nodules. Thorax, 75(4), 306-312. https://doi.org/10.1136/thoraxjnl-2019-214104
[2] Dembrower, K., Liu, Y., Azizpour, H., Eklund, M., Smith, K., Lindholm, P., & Strand, F. (2020). Comparison of a deep learning risk score and standard mammographic density score for breast cancer risk prediction. Radiology, 294(2), 265-272.
[3] Liu, C., Liu, X., Wu, F., Xie, M., Feng, Y., & Hu, C. (2018). Using artificial intelligence (Watson for Oncology) for treatment recommendations amongst Chinese patients with lung cancer: Feasibility study. Journal of Medical Internet Research, 20(9), e11087.
[4] Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679
[5] Kordzadeh, N., & Ghasemaghaei, M. (2021). Algorithmic bias: Review, synthesis, and future research directions. European Journal of Information Systems, 1-22.
[6] Pritchard, D. (2005). Epistemic luck. Oxford: Clarendon Press.
[7] Whiting, D. (2020). Knowledge, justification, and (a sort of) safe belief. Synthese, 197(8), 3593-3609.
[8] Dahl, E. (2018). Appraising black-boxed technology: The positive prospects. Philosophy & Technology, 31(4), 571-591.
[9] Grindrod, J. (2019). Computational beliefs. Inquiry, 1-22.
[10] Miller, B., & Record, I. (2013). Justified belief in a digital age: On the epistemic implications of secret Internet technologies. Episteme, 10(2), 117-134.