Artificial intelligence (AI) technologies and machine learning (ML) techniques are beginning to occupy a prominent role in medical innovation. While many perceive AI as a technology of promise and hope – one that allows for more accurate and efficient diagnosis and treatment – the uptake of these technologies in hospitals remains low. A major reason for this is the lack of transparency associated with these technologies. In this paper, we describe how transparency issues frame the development of AI applications for clinical purposes. We show that grasping the ‘operational logics’ of AI and ML in clinical contexts is not a post-hoc achievement of using the software but is accomplished by involving clinical end-users in the process of development. Drawing on qualitative research with collaborating researchers developing an AI technology for the early diagnosis of a rare respiratory disease, pulmonary hypertension (PH), this paper examines how including clinicians and clinical researchers in the collaborative practices of AI developers de-troubles transparency. Our research shows how this de-troubling of transparency occurs in three dimensions of AI development for PH: querying datasets, building software, and training the model. The close collaboration results in an AI application that is at once social and technological, one that de-troubles transparency and eases the way for the application’s continued development towards full validation and acceptance in the clinical domain.