Conditional Analysis of Model Abilities

Speaker: Jacqueline Harding, Stanford University Abstract: What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models’ capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and …

Does the No Miracles Argument Apply to AI?

Speaker: Prof Darrell Rowbottom, Lingnan University Abstract: According to the standard no miracles argument, science’s predictive success is best explained by the approximate truth of its theories. In contemporary science, however, machine learning systems, such as AlphaFold2, are also remarkably predictively successful. Thus, we might ask what best explains such successes. Might these AIs accurately …

AI, Music, and Creativity: International Symposium & 2023/24 Rayson Huang Lectures

Date & Time: Mar 1, 2024 (Fri), 4:00 – 6:30 pm; Mar 2, 2024 (Sat), 9:30 am – 6:30 pm Programme Rundown: here AI, Music, and Creativity: At a Crossroads is an international symposium jointly hosted by the HKU Department of Music and the AI & Humanity Lab at the Department of Philosophy. This event is a …

Measuring Capabilities and Safety

Speaker: Dr Dan Hendrycks, Center for AI Safety Abstract: Dr Hendrycks will discuss principles for measuring the capabilities of AI systems and walk through popular general capabilities benchmarks. He will then discuss how general capabilities can be separated from the measurement of their safety, and give an overview of new ways to measure safety.

A Network Account of Trustworthy AI: A New Framework

Date: March 8, 2024 (Friday) Speaker: Prof Song Fei, Lingnan University Chair: Dr Frank Hong, The University of Hong Kong Abstract: In the literature on trustworthy AI, there are two main positions. The first view holds that, in order to illuminate trustworthy AI, we need a general account of trust and trustworthiness that is less anthropocentric. Because …

Can Machines Manipulate Us?

“YouTube, Facebook, X (formerly Twitter), Instagram, Pinterest, TikTok, and chatbots like ChatGPT and Bard—all of them have the power to manipulate you …” Check out this recent paper written by Rachel Sterken, our Department Head of Philosophy at The University of Hong Kong, along with Jessica Pepp and Eliot Michaelson. Read the full article here