Upcoming Event

The Fair Treatment of Unequal Claims: The Moral Importance of Injecting Randomness into Algorithmic Decision Systems

Abstract: One compelling approach treats value alignment as fundamentally a question of the fairness of the procedure for determining the principles that AI systems should align with (Gabriel 2020). This talk examines how a procedure should treat different stakeholders’ claims fairly, especially in multi-stakeholder cases where people have both …


AI Consciousness and Well-Being

Date: March 5–6, 2025 (Wednesday, Thursday)
Time: 10:00 – 17:00 (Programme rundown will be provided later)
Venue: 3/F, MPZ Room 2, HKU Main Library
Registration: here

Desire homuncularism – Agency, ethical standing, and skin in the game
Prof Steve Petersen, Niagara University

AI Zombies: Global Workspace Theory and Resource-Rational Analysis
Prof Justin Tiehen, University of Puget Sound

It probably …


A Matter of Principle? AI Alignment as the Fair Treatment of Claims (co-authored with Geoffrey Keeling)

Abstract: The normative challenge of AI alignment centres upon what goals or values to encode in AI systems to govern their behaviour. A number of answers have been proposed, including the notion that AI must be aligned with human intentions, or that it should aim to be helpful, honest and harmless. Nonetheless, both accounts suffer …


Evaluating AI Agents for Dangerous (Cognitive) Capabilities

Abstract: AI agents based on Large Language Models (LLMs) demonstrate human-level performance on some theory of mind (ToM) tasks (Kosinski 2024; Street et al. 2024). Here ToM is roughly the ability to predict and explain behaviour by attributing mental states to oneself and others. ToM capabilities matter for AI safety because, at least in humans, …


The Ethics of Amplification

Date: February 28, 2025 (Friday)
Time: 13:00 – 15:00
Venue: Rm 10.13, Run Run Shaw Tower, Centennial Campus, The University of Hong Kong
Registration: here
Speaker: Prof Jeffrey Howard, University College London
Chair: Dr Frank Hong, The University of Hong Kong

Abstract: Social media platforms’ AI systems learn from user data to predict what content will keep users engaged. …


Alignment

Abstract: The speaker will distinguish some different conceptions of alignment, exploring how each conception relates to safety and existential risk.
