The Limits of Explainability for Reducing Algorithmic Discrimination

Date: April 5, 2024 (Friday)

Speaker: Dr Kate Vredenburgh, London School of Economics

Chair: Dr Frank Hong, The University of Hong Kong

Abstract: 

Proponents of algorithmic decision-making have argued that the use of algorithms can reduce discrimination relative to the baseline of human decision-making. One reason offered is the greater explainability of algorithmic models, that is, the ability to provide information so that people can understand how inputs influence outputs. This talk examines the relationship between discrimination and explainability. I will argue that people are at least as explainable as AI for the purposes of detecting discrimination, and that explanations of particular algorithmic decisions are of limited use in providing legal proof with which to combat discrimination. Together, these two claims should lead us to conclude that algorithmic decision-making is not preferable to human decision-making on the grounds that it makes discrimination more transparent and provable.
