The Fair Treatment of Unequal Claims: The Moral Importance of Injecting Randomness into Algorithmic Decision Systems

Abstract:

One compelling approach treats value alignment as fundamentally a question of the fairness of the procedure for determining the principles that AI systems should align with (Gabriel 2020). This talk examines how such a procedure should treat different stakeholders’ claims fairly, especially in multi-stakeholder cases where people have both fundamental value disagreements and claims of different strengths. Drawing on John Broome’s (1990) theory of fairness, I argue that fairness requires respecting claims in proportion to their strength. Particular properties of the modern data and AI ecosystem, such as the scale at which AI systems operate, algorithmic monoculture, and data sharing, make Broome’s view especially compelling for AI systems. This view also constrains the allocation principles that AI systems adopt, requiring some randomness in the decision systems themselves.
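Broome’s proportional view is often operationalized as a weighted lottery: when a good is indivisible, each claimant’s chance of receiving it is proportional to the strength of their claim. The sketch below is a minimal illustration of that idea (the function name, setup, and numbers are my own, not from the talk):

```python
import random

def weighted_lottery(claims, seed=None):
    """Select one claimant, with probability proportional to claim strength.

    claims: dict mapping claimant -> positive claim strength.
    seed: optional seed for reproducibility in examples.
    """
    rng = random.Random(seed)
    names = list(claims)
    weights = [claims[n] for n in names]
    # A single weighted draw: a claimant with twice the claim strength
    # gets twice the chance, rather than automatically winning.
    return rng.choices(names, weights=weights, k=1)[0]

# Hypothetical example: B's claim is twice as strong as A's,
# so B should win roughly 2/3 of the time across many draws.
claims = {"A": 1.0, "B": 2.0}
winner = weighted_lottery(claims)
```

Note the contrast with a deterministic rule that always awards the good to the strongest claim: on Broome’s view, the weaker claim still deserves a proportionate chance, which is why some randomness must live inside the decision system itself.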
