Abstract:
Making AI explainable has become an increasingly urgent demand, and one popular approach is to generate so-called counterfactual explanations. In this talk, I offer a philosophical critique of key aspects of this approach and raise an important challenge arising from a tension between two constraints on a good counterfactual explanation in this context. I then suggest a way to meet the challenge, drawing on recent work on counterfactual generation.