Abstract:
LLMs have improved dramatically in capability in recent years. This raises the question of whether LLMs could become genuine agents with beliefs and desires. This paper demonstrates an in-principle limit to LLM agency, based on their architecture. LLMs are next-word predictors: given a string of text, they calculate the probability that various words come next, and they produce outputs that reflect these probabilities. I show that next-word predictors are exploitable. If LLMs are prompted to make probabilistic predictions about the world, these predictions are guaranteed to be incoherent, and so Dutch-bookable. If LLMs are prompted to make choices over actions, their preferences are guaranteed to be intransitive, and so money-pumpable. In short, the problem is that selecting an action based on its potential value is structurally different from selecting the description of an action that is most likely given a prompt: probability cannot be forced into the shape of expected value. The in-principle exploitability of LLMs raises doubts about how agential they can become. This exploitability also offers an opportunity for humanity to safely control such AI systems.
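To make the two exploitability mechanisms concrete, the following is a minimal, purely illustrative Python sketch, not taken from the paper; the helper names (`dutch_book_loss`, `money_pump`) and all numbers are hypothetical. The first function computes the sure loss a bookie can extract when an agent's credences over exhaustive, mutually exclusive outcomes sum to more than 1; the second walks an agent with cyclic (intransitive) preferences around a trading loop, collecting a fee at every swap.

```python
# Illustrative sketch only (not from the paper): how incoherent probabilities
# and intransitive preferences can be exploited. All numbers are hypothetical.

def dutch_book_loss(credences: dict[str, float], stake: float = 1.0) -> float:
    """Sure loss from selling an agent a bet on every outcome at its own prices.

    The agent prices a bet paying `stake` if outcome X occurs at
    credences[X] * stake. If its credences over an exhaustive, exclusive set of
    outcomes sum to more than 1 (the case sketched here), the bookie sells it
    one bet per outcome: the agent overpays relative to the single stake it can
    ever collect, whichever outcome occurs.
    """
    total_paid = sum(p * stake for p in credences.values())
    payout_received = stake  # exactly one outcome occurs, whichever it is
    return total_paid - payout_received  # > 0: guaranteed loss for the agent


def money_pump(prefers, holding: str, offers: list[str], fee: float) -> tuple[str, float]:
    """Simulate a trader offering a sequence of swaps to an agent.

    `prefers(x, y)` is True when the agent strictly prefers x to y and will
    therefore pay `fee` to swap y for x. If the preference relation is cyclic
    (intransitive), a suitable offer sequence returns the agent to its original
    holding while it pays a fee at every step.
    """
    paid = 0.0
    for offer in offers:
        if prefers(offer, holding):
            holding, paid = offer, paid + fee
    return holding, paid


if __name__ == "__main__":
    # Incoherent credences: P(rain) + P(no rain) = 1.2 > 1, so a 0.2 sure loss.
    print(dutch_book_loss({"rain": 0.7, "no rain": 0.5}))

    # Cyclic preferences: A beats B, B beats C, C beats A.
    beats = {("A", "B"), ("B", "C"), ("C", "A")}

    def prefers(x: str, y: str) -> bool:
        return (x, y) in beats

    end, paid = money_pump(prefers, holding="A", offers=["C", "B", "A"], fee=0.05)
    print(end, paid)  # back to holding "A", having paid 0.15 in fees
```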