In April 2018, the United States Food and Drug Administration (FDA) for the first time permitted the marketing of a medical device utilizing artificial intelligence (AI). Based on an analysis of retinal images, the tool produces one of two possible outputs: (1) “more than mild diabetic retinopathy detected: refer to an eye care professional” or (2) “negative for more than mild diabetic retinopathy; rescreen in 12 months.” No clinician is involved in interpreting the image.
This device, IDx-DR, is intended for use by healthcare professionals to make screening decisions about patients with diabetes. Its use exemplifies what I will refer to as algorithmic decision-making: a decision-maker defers decisions that allocate benefits and harms to the output of a predictive algorithm, with little or no human input (Binns 2018, 543).
A key reason for turning to algorithmic decision-making in health and elsewhere is its epistemic value when decisions must be made under uncertainty. Machine learning algorithms can become impressively accurate at predicting whether an individual has an unknown feature of interest, e.g., a medical condition. However, when consequential decisions about people are deferred to an algorithmic output, a key issue concerns the justification for using this procedure.
Drawing on work on political legitimacy, I discuss what epistemic and moral standards an algorithmic decision-making procedure must meet to be legitimate. Algorithmic legitimacy may be understood in both a descriptive and a normative sense. I will focus on legitimacy in the normative sense, according to which algorithmic legitimacy requires that the procedure meet certain epistemic and moral standards, thereby making it acceptable to all reasonable stakeholders.
Proponents of instrumentalism evaluate the legitimacy of a decision procedure relative to its ability to generate correct decisions. On this view, legitimacy is procedure-independent. In contrast, pure proceduralism takes legitimacy to be exclusively a matter of features intrinsic to the procedure. On this view, using IDx-DR will be legitimate if it meets certain moral and epistemic standards, regardless of the degree to which it produces correct decisions.
I argue that neither approach is satisfactory in the context of algorithmic decision-making. While the legitimacy of algorithmic decision-making is, I contend, partly due to its epistemic value when making decisions under uncertainty, concerns about inscrutability and fairness show that a wholly instrumental account of algorithmic legitimacy is implausible. Procedural features must also be accounted for to ensure legitimacy. To this end, I sketch an account of epistemic algorithmic legitimacy drawing on Estlund’s influential work on political legitimacy (Estlund 2008).
Binns, Reuben. “Algorithmic Accountability and Public Reason.” Philosophy & Technology 31 (2018): 543–556. https://doi.org/10.1007/s13347-017-0263-5
Estlund, David. Democratic Authority. Princeton, NJ: Princeton University Press, 2008.