Abstract:
Since the 1960s, philosophers have seriously considered the possibility that psychological states could be multiply realizable not only in humans and other terrestrial creatures, but also in potential alien and artificial cognizers. For many it has become an accepted constraint on our theories of the nature of perception and cognition that they should accommodate these as-yet-undiscovered psychological beings. In our work, we have resisted this demand by arguing that the present state of evidence and theorizing about cognition does not support the expectation that cognition essentially has any features that would require us to endorse multiple realizability as a constraint. Our approach nevertheless leaves it open that future discoveries or inventions could require us to reassess. Might the advent of LLMs be just such a revolution? We argue that it is not. The capabilities of current LLMs are well within the bounds of what past theorists anticipated, and have in fact arrived rather late by the expectations of the late 1960s and early 1970s, despite computing power exponentially greater than was then predicted. It was precisely in recognition that such systems would be possible that theorists proposed criteria more demanding than linguistic behavior. In particular, it has been generally agreed that the attribution of cognitive capacities is in a sense holistic, such that cognizing requires the satisfaction of an overall psychological theory. And while there are disputes about exactly what is included in such a theory, there is broad agreement about its general shape. Here we argue that LLMs, impressive as they may be, do not yet realize a full psychological theory. More important than that assessment, however, is the explication of the general metric that requires us to consider more than just impressive communicative behaviors.