Speaker: Rob Long, Center for AI Safety
Abstract:
AI systems could plausibly deserve moral consideration in the not-too-distant future. Although precise evaluations are difficult to devise given moral and empirical uncertainty about AI moral patienthood, they will be an important tool for handling this issue responsibly and for avoiding both under- and over-attribution of moral patienthood to AI systems. This talk reports on a recently completed project that proposed evaluations and lab procedures at the request of a leading AI lab.