AIH symposium

Date: February 14, 2025 (Friday)
Time: 13:00 – 18:00
Venue: 11/F, Cheng Yu Tung Tower, The University of Hong Kong
Registration: here
Speakers: Iason Gabriel, Staff Research Scientist, Google DeepMind; Geoffrey Keeling, Senior Research Scientist, Google

Alignment

Abstract: The speaker will distinguish some different conceptions of alignment, exploring how each conception relates to safety and existential risk.

AI and the future of consciousness science

Date: December 4, 2024 (Wednesday)
Time: 13:00 – 15:00
Venue: G/F, Seminar Room, HKU Main Library
Registration: here
Speaker: Dr Henry Shevlin, University of Cambridge
Chair: Dr Frank Hong, The University of Hong Kong
Abstract: Recent rapid progress in artificial intelligence has prompted renewed interest in the possibility of consciousness in artificial systems. This talk argues that this question forces …

Epistemic opportunities: AI Systems as Cognitive Scaffolding

Date: December 6, 2024 (Friday)
Time: 13:00 – 15:00
Venue: 2/F, MPA, HKU Main Library
Registration: here
Speaker: Dr Karina Vold, University of Toronto
Chair: Dr Frank Hong, The University of Hong Kong
Abstract: Artificial intelligence (AI) systems are increasingly able to outperform humans at specific tasks, such as beating world champions at Go and solving 50-year-old grand challenges in …

Philosophy of AI in Asia Workshop

Date: March 26–27, 2025 (Wednesday, Thursday)
Time: 09:30 – 16:30 (Programme rundown will be provided later)
Venue: 11/F, Cheng Yu Tung Tower, HKU
Registration: here
Speakers: Dr Frank Hong, The University of Hong Kong; Prof Darrell Rowbottom, Lingnan University; Prof Jiji Zhang, The Chinese University of Hong Kong; Dr Pak-Hang Wong, Hong Kong Baptist University; Dr Jun Otsuka, Kyoto University …

What remains of the singularity hypothesis?

Abstract: The idea that advances in the cognitive capacities of foundation models like LLMs will lead to a period of rapid, recursive self-improvement — an “intelligence explosion” or “technological singularity” — has recently come under sustained criticism by academic philosophers. I evaluate the extent to which this criticism successfully undermines the argument for a singularity, …

Meaning and the Mechanics of Production in LLMs (with Kyle Mahowald)

Abstract: A common view is that a normal human assertion is successful only if (i) the speaker utters words which express what they intend to express, and (ii) the hearer comes to believe what is expressed. We examine how this picture looks on the hypothesis that LLMs can make assertions. In particular, we ask whether …
