Linguistic Relativity, LLMs, and the Hog Thesis

Date: December 5, 2025 (Friday)

Time: 15:30-17:00

Venue: Rm 10.13, 10/F, Run Run Shaw Tower, Centennial Campus, HKU

Speaker: Prof Ryan M. Nefdt, University of Cape Town

Abstract:

There is a veritable cornucopia of options for scholars interested in using the tools of philosophy and cognitive science to interpret the behaviour of large language models like Claude, Gemini and ChatGPT. Many of these views start from a theory honed for the interpretation of humans and then attempt to explore its applicability to LLMs (Chalmers 2025, Mandelkern & Linzen 2024, Lederman & Mahowald 2024, Nefdt 2023). There are alignment issues with such strategies, but they can produce insights.

 
In their bold new book, Cappelen & Dever (2025) introduce the ‘Whole Hog Thesis’, which relies on the observable output of LLMs and established conditional relationships between mental states (beliefs, desires and intentions) to advance a position on which LLMs are complete linguistic and cognitive agents. They valiantly challenge the mainstream dialectic and anticipate a number of sophisticated responses to their position.

I’m generally sympathetic to hogging it. But in this workshop, I aim to present a few further challenges not considered by C&D, as well as a framework for supporting their alien contents view in terms of a form of linguistic relativity. Specifically, I will put forward the problems of model individuation and embedded textual deepfakes. I will suggest possible responses which can potentially save the hog, though not in its entirety, resulting in a somewhat less ebullient ‘Most of the Hog Thesis’.