Erica Lam

Evaluating AI Systems for Moral Patienthood

Speaker: Rob Long, Center for AI Safety
Abstract: AI systems could plausibly deserve moral consideration in the not-too-distant future. Although precise evaluations are difficult to devise in light of moral and empirical uncertainty about AI moral patienthood, they will be an important tool for handling this issue responsibly and for avoiding both under- and over-attributing …


The Hard Proxy Problem: Proxies aren’t intentional, they’re Intentional

Date: March 13, 2024 (Wednesday)
Speaker: Dr Gabbrielle Johnson, Claremont McKenna College
Chair: Dr Frank Hong, The University of Hong Kong
Abstract: This paper concerns the Proxy Problem: often machine learning programs utilize seemingly innocuous features as proxies for socially-sensitive attributes, posing various challenges for the creation of ethical algorithms. I argue …


Neutrality, AI, and LLMs

AI Agency and Wellbeing Workshop
The workshop explores the following questions: Can a large language model be a cognitive and linguistic agent (in the way that humans are agents)? Can there be such a thing as AI well-being? If the answers are yes, what are the implications?
Date: November 21, 2023
Title: Neutrality, AI, and …


AI, Consciousness, Emotions, Intelligence, and Moral Standing

AI Agency and Wellbeing Workshop
Date: November 21, 2023
Title: AI, Consciousness, Emotions, …

