Artificial intelligence (AI) can provide the basis for tools that improve global health care, bringing us closer to the realization of the third Sustainable Development Goal (SDG 3): good health and well-being for all.
However, before AI-based tools are integrated into medical practice and applied to patients, it must be demonstrated that they serve their intended purpose without unintended effects. Most of the AI-based tools currently used in medical practice were developed and regulated for a limited (e.g., national) population; consequently, their adoption is fragmented across the globe. Because health is an issue that transcends borders, the ITU/WHO Focus Group on Artificial Intelligence for Health (FG-AI4H) encourages a collective effort among stakeholders from across the globe (including developers, regulators, healthcare practitioners, and public health institutes) to ensure the safety and trustworthiness of AI-based tools and to permit their widespread implementation.
This Open Code Project aims to produce the digital building blocks (six software packages) that compose the FG-AI4H Assessment Platform. The Assessment Platform supports the end-to-end assessment of AI for health algorithms; it is distinguished from AI “challenge” platforms by its consideration of regulatory and ethics guidelines and of the needs of other AI for health stakeholders.