Should We Trust (or Care About) What ChatGPT Tells Us About Itself?

AI Agency and Wellbeing Workshop

The workshop explores the following questions:

  • Can a large language model be a cognitive and linguistic agent (in the way that humans are agents)?
  • Can there be such a thing as AI well-being?
  • If the answers are yes, what are the implications?

Date: November 22, 2023

Title: Should We Trust (or Care About) What ChatGPT Tells Us About Itself?

Speakers:

Prof Herman Cappelen, The University of Hong Kong

Prof Josh Dever, The University of Texas at Austin

Abstract: 

It is easy to get large language models to talk about themselves. They tell us that they enjoy answering questions, claim to understand our questions, and express a desire to help users. Taking these self-reports at face value implies that LLMs possess intricate inner experiences, comprehend natural languages, and engage in intentional action. We argue that self-reports can play a vital role in theorizing about LLMs, a role not fundamentally different from the one played by human self-reports and introspective accounts.