Upcoming Event

Beyond Transitive Closure: Analyzing Non-Transitive Inferential Patterns in LLMs

Date: June 2, 2026 (Tuesday)
Time: 3:30pm – 5:00pm
Venue: Rm 10.13, Run Run Shaw Tower, Centennial Campus, HKU
Speaker: Professor Eduardo Barrio, University of Buenos Aires
Abstract: This talk investigates the limits of transitive reasoning in Large Language Models (LLMs), specifically examining their handling of inferential patterns that violate classical structural properties. […]


After ‘Consciousness’ Workshop

Date: January 27-28, 2026 (Tuesday – Wednesday)
Time: 10:00am – 5:30pm
Venue: Rm 10.13, Run Run Shaw Tower, Centennial Campus, HKU
Co-Organized by: HKU-AIH Lab & Berggruen Institute China
Keynote Speakers/Titles/Abstracts:
Professor Edouard Machery, University of Pittsburgh: Caring and Consciousness
Professor Keith Frankish, University of Sheffield: Reactivity as Replacement: Re-Engineering Consciousness for Science and Policy
Professor Matti


Democracy, Epistemic Agency, and AI – 30 Jan 2026

Date: 30 January 2026 (Friday)
Time: 10:00am – 11:30am
Venue: Room 4.36, 4/F, Run Run Shaw Tower, Centennial Campus, HKU
Speaker: Prof Mark Coeckelbergh, Philosophy, University of Vienna
Abstract: Theories of democracy assume that citizens have some form of political knowledge. But apart from widespread attention to the phenomenon of fake news and misinformation, less attention has been


(Title to be announced) – 12 Sep 2025

Date: 12 September 2025 (Friday)
Time: 3:30pm – 5:00pm
Venue: Room 10.13, 10/F, Seminar Room, Run Run Shaw Tower, Centennial Campus, HKU
Speaker: Prof David Thorstad, Philosophy, Vanderbilt University
Abstract: Recent years have seen increasing concern that artificial intelligence may soon pose an existential risk to humanity. One leading ground for concern is


April 26-27, 2025 – Workshop in Philosophy & AI at Peking University

Peking University, Beijing Normal University, and The University of Hong Kong have jointly organised a workshop on Philosophy and AI, funded by the Research Funds for Excellent Interdisciplinary Research Groups at Beijing Normal University.
Dates: April 26-27, 2025
Venue: Berggruen Institute China Center, No. 54 Yannan Garden, Peking University, Beijing
Host: Department of Philosophy


Artificial intelligence and destabilized moral concepts

Date: April 11, 2025 (Friday)
Speaker: Professor Regina Rini, Associate Professor and Canada Research Chair in Philosophy of Moral and Social Cognition, Department of Philosophy, York University (PhD, New York University, 2011; BA, Georgetown University, 2004). Research background in moral psychology, ethical theory,


What’s hidden inside predictively successful deep learning models?

Abstract: In recent work, I argued that there is an AI-specific no miracles argument (NMA) which is at least as plausible as the traditional NMA. In this talk, I will express two doubts about this claim, relative to the AI-specific NMA that I previously proposed. I will then develop a new AI-specific NMA in response,


Transparency of What?

Abstract: Despite its importance, the concept of algorithmic transparency has yet to be fully explicated. By asking what is transparent or opaque, we propose a comprehensive framework dividing transparency into four forms: use transparency, which discloses an algorithm's goals and uses; data transparency, which informs sources, processing, and data quality; model transparency, which explains how the

