Neutrality, AI, and LLMs

AI Agency and Wellbeing Workshop

The workshop explores the following questions:

  • Can a large language model be a cognitive and linguistic agent (in the way that humans are agents)?
  • Can there be such a thing as AI well-being?
  • If the answers are yes, what are the implications?

Date: November 21, 2023

Title: Neutrality, AI, and LLMs

Speaker: Dr Patrick Greenough, University of St Andrews

Abstract: 

A good thinker should be impartial, unbiased, and objective. (Or so goes the received wisdom.) Should a good AI system be impartial, unbiased, and objective too? Should a good LLM also exhibit these same virtues? (On this score, prominent LLMs explicitly announce that they are designed to be neutral.) Perhaps such neutrality is unachievable. Perhaps it turns out to be a hindrance to being a good source of information or a truly intelligent system. Perhaps we should instead say that intelligent systems are, and should be, biased and partisan. After all, neutrality often gives rise to fence-sitting, lack of conviction, and even a kind of self-silencing. In this talk, I explore whether, and in what way, neutrality (in its various forms) is a virtue or a vice of an intelligent system (broadly conceived so as to include LLMs). One conclusion will be that current LLMs are far too neutral to be considered sentient, person-like, or even capable of general intelligence. Such is the Neutrality Problem for LLMs.
