The Fair Treatment of Unequal Claims: The Moral Importance of Injecting Randomness into Algorithmic Decision Systems

Abstract: One compelling approach treats value alignment as fundamentally a question of the fairness of the procedure for determining the principles with which AI systems should align (Gabriel 2020). This talk examines how a procedure should treat different stakeholders’ claims fairly, especially in multi-stakeholder cases where people have both …

AI Consciousness and Well-Being

Date: March 5–6, 2025 (Wednesday–Thursday)
Time: 10:00 – 17:00 (programme rundown will be provided later)
Venue: 3/F, MPZ Room 2, HKU Main Library
Registration: here
Desire homuncularism – Agency, ethical standing, and skin in the game (Prof Steve Petersen, Niagara University)
AI Zombies: Global Workspace Theory and Resource-Rational Analysis (Prof Justin Tiehen, University of Puget Sound)
It probably …

A Matter of Principle? AI alignment as the Fair Treatment of Claims (co-authored with Geoffrey Keeling)

Abstract: The normative challenge of AI alignment centres upon what goals or values to encode in AI systems to govern their behaviour. A number of answers have been proposed, including the notion that AI must be aligned with human intentions or that it should aim to be helpful, honest and harmless. Nonetheless, both accounts suffer …

Evaluating AI Agents for Dangerous (Cognitive) Capabilities

Abstract: AI agents based on Large Language Models (LLMs) demonstrate human-level performance at some theory of mind (ToM) tasks (Kosinski 2024; Street et al. 2024). Here ToM is roughly the ability to predict and explain behaviour by attributing mental states to oneself and others. ToM capabilities matter for AI safety because, at least in humans, …

Philosophical Commitments to LLM Evaluations: The Problem of Moving Goalposts and Observational Relativity

Date: February 7, 2025 (Friday)
Speaker: Ms Ninell Oldenburg, University of Copenhagen
Chair: Dr Frank Hong, The University of Hong Kong
Abstract: While most technical and philosophical research on LLMs tries to find a translation from human functions to computational principles, understanding the flip side — how computational principles translate into natural ones — is often overlooked. …

Future science and artificial consciousness

Date: February 21, 2025 (Friday)
Speaker: Dr Leonard Dung, Ruhr-University Bochum
Chair: Dr Frank Hong, The University of Hong Kong
Abstract: Does consciousness require biology, or can systems made out of other materials be conscious? I develop an argument for the view that it is (nomologically) possible that some non-biological creatures are conscious, including conventional, silicon-based AI …

The Ethics of Amplification

Date: February 28, 2025 (Friday)
Time: 13:00 – 15:00
Venue: Rm 10.13, Run Run Shaw Tower, Centennial Campus, The University of Hong Kong
Registration: here
Speaker: Prof Jeffrey Howard, University College London
Chair: Dr Frank Hong, The University of Hong Kong
Abstract: Social media platforms’ AI systems learn from user data to predict what content will keep users engaged. …

The Philosophy of AI: Themes from Seth Lazar

Date: January 17, 2025 (Friday)
Time: 13:00 – 18:00
Organizers: AIH Lab and Hong Kong Ethics Lab
Prof Seth Lazar, Australian National University
Agent Advocates: Some Questions and Issues (Prof Nikolaj Jang Lee Linding Pedersen, Yonsei University)
AI Personhood: A Cost-sensitive Analysis (Dr Yiwen Zhan, Beijing Normal University)
Steering Towards Utopia: The Importance of the Medium Term in AI …

The Philosophy of AI: Themes from Iason Gabriel

Date: February 14, 2025 (Friday)
Organizers: AIH Lab, Programme on Artificial Intelligence and the Law, and Hong Kong Ethics Lab
A Matter of Principle? AI alignment as the Fair Treatment of Claims (co-authored with Geoffrey Keeling) (Iason Gabriel, Staff Research Scientist, Google DeepMind)
Evaluating AI Agents for Dangerous (Cognitive) Capabilities (Geoffrey Keeling, Senior Research Scientist, Google)
The Fair Treatment of …

Alignment

Abstract: The speaker will distinguish some different conceptions of alignment, exploring how each conception relates to safety and existential risk.