The Dangers and (potential) Incoherence of AI Alignment
Speaker: Prof. Herman Cappelen, The University of Hong Kong

Abstract: This paper makes three main claims: 1) There is no universal set of core values for AI systems to align with – moral disagreements run too deep. 2) The increased causal potency that creates the problem of AI risk also implies that imperfect safety mechanisms …