Erica Lam

Does the No Miracles Argument Apply to AI?

Speaker: Prof Darrell Rowbottom, Lingnan University
Abstract: According to the standard no miracles argument, science’s predictive success is best explained by the approximate truth of its theories. In contemporary science, however, machine learning systems, such as AlphaFold2, are also remarkably predictively successful. Thus, we might ask what best explains such successes. Might these AIs accurately …


AI, Music, and Creativity: International Symposium & 2023/24 Rayson Huang Lectures

Date & Time: Mar 1, 2024 (Fri), 4:00 – 6:30 pm; Mar 2, 2024 (Sat), 9:30 am – 6:30 pm
Programme Rundown: here
AI, Music, and Creativity: At a Crossroads is an international symposium jointly hosted by the HKU Department of Music and the AI & Humanity Lab at the Department of Philosophy. This event is a …


Measuring Capabilities and Safety

Speaker: Dr Dan Hendrycks, Center for AI Safety
Abstract: Dr Hendrycks will discuss principles for measuring the capabilities of AI systems and walk through popular general capabilities benchmarks. He will then discuss how general capabilities can be separated from the measurement of safety, and give an overview of new ways to measure safety.

A Network Account of Trustworthy AI: A New Framework

Date: March 8, 2024 (Friday)
Speaker: Prof Song Fei, Lingnan University
Chair: Dr Frank Hong, The University of Hong Kong
Abstract: In the literature on trustworthy AI, there are two main positions. The first view is that, in order to illuminate trustworthy AI, we need a general account of trust and trustworthiness that is less anthropocentric. Because …


Can Machines Manipulate Us?

“YouTube, Facebook, X (formerly Twitter), Instagram, Pinterest, TikTok, and chatbots like ChatGPT and Bard—all of them have the power to manipulate you …” Check out this recent paper written by Rachel Sterken, our Department Head of Philosophy at The University of Hong Kong, along with Jessica Pepp and Eliot Michaelson. Read the full article here

Regulation by Benchmark

Speaker: Dr Peter Salib, University of Houston
Abstract: Assume that we succeed in crafting effective safety benchmarks for frontier AI systems. By “effective,” I mean benchmarks that are both aimed at measuring the riskiest capabilities and able to reliably measure them. It would then seem sensible to integrate those benchmarks into safety laws governing …

