In recent years, research on healthcare applications of artificial intelligence (AI) has proliferated across clinical processes such as the diagnosis and screening of diseases, the allocation of healthcare resources, and the development of personalised treatments. Enabled by massive amounts of data and greater computing power, AI approaches such as deep learning involve increasingly complex processes that can make it difficult to understand how algorithms reach their decisions. The resulting lack of explainability has been considered a major barrier to the adoption of machine learning (ML) algorithms in healthcare. This paper reports the preliminary findings of a qualitative investigation of the perspectives of professional stakeholders working on ML algorithms in diagnosis and screening. The study involved in-depth interviews with clinicians, medical technologists, screening program managers, consumer health representatives, regulators and developers. All participants agreed on the qualities that diagnosis should have: it should proceed in a way that enabled human oversight, promoted critical thinking among clinicians, and ensured patient safety. However, participants were divided on whether explanation was an important means to achieve this end. Broadly, some participants proposed ‘Outcome-assured’ diagnostic practices, while others proposed ‘Explanation-assured’ diagnostic practices, a distinction that applied whether or not AI was used. The ‘Outcome-assured’ and ‘Explanation-assured’ approaches differed in the significance they attributed to explanation, in part because they conceptualised explanation differently: not only in relation to what explanation is, but also in relation to the level of explanation required and who might be owed one. Levels of explanation ranged from technical information about coding, through medical information for diagnostic decision-making, to clinical information for communicating a diagnosis to patients. The ‘Explanation-assured’ approach is more demanding than the ‘Outcome-assured’ approach: it assumes all of the goods of the latter but adds the need for explanation. The two approaches also assumed different epistemic bases for trustworthiness in healthcare AI. For the ‘Explanation-assured’ approach, explainability was a necessary epistemic tool that allowed contestation by users across different levels of expertise. A culture of contestation enabled by explainability could thus promote trustworthiness in healthcare AI. In contrast, the ‘Outcome-assured’ approach implied that trust can be established in the absence of explainability by appealing to evidence and assurance from experts. We argue that in practice these two approaches are complementary, both for clinical diagnosis and for trustworthiness in the context of healthcare AI.