The Hard Proxy Problem: Proxies aren’t intentional, they’re intentional

Date: March 13, 2024 (Wednesday)

Speaker: Dr Gabbrielle Johnson, Claremont McKenna College

Chair: Dr Frank Hong, The University of Hong Kong

Abstract:

This paper concerns the Proxy Problem: machine learning programs often use seemingly innocuous features as proxies for socially sensitive attributes, posing various challenges for the creation of ethical algorithms. I argue that to address this problem, we must first settle a prior question: what does it mean for an algorithm that has access only to seemingly neutral features to be using those features as “proxies” for, and so to be making decisions on the basis of, protected-class features? Borrowing resources from the philosophy of mind and language, I argue that the answer depends on whether discrimination against those protected classes explains the algorithm’s selection of individuals. This approach rules out standard theories of proxy discrimination in law and computer science, which rely either on overly intellectual views of agent intentions or on overly deflationary views that reduce proxy use to statistical correlation. Instead, my theory highlights two distinct ways an algorithm can reason using proxies: either the proxies themselves are meaningfully about the protected classes, highlighting a new kind of intentional content for philosophical theories of mind and language; or the algorithm explicitly represents the protected-class features themselves, and proxy discrimination becomes regular, old, run-of-the-mill discrimination.
