A constant source of concern regarding artificial intelligence (AI) approaches to a problem is that, in many instances, the solutions they produce cannot be reflectively checked: it is unclear why a solution holds even when it does go through. Explainable AI has been advanced in part to address this concern. However, what kind of explanation could reasonably be expected in this context? I argue that none of the usual models of explanation (deductive-nomological accounts, pragmatics of explanation, unification views, and interventionist approaches) applies to this AI approach, and that, as a result, either some sui generis notion of explanation is required or explanation is not the proper category for characterizing Explainable AI. I close by drawing some lessons for the way forward.