It probably doesn’t matter whether software can be made sentient

Abstract:

Around 40% of philosophers believe that future AI systems will be conscious (Bourget and Chalmers 2023). Others have argued that the realization of phenomenal consciousness in conventional computer hardware is infeasible, nomologically impossible, or even metaphysically impossible (Block 2009; Godfrey-Smith 2016; Searle 1992; Shiller 2024; Tononi and Koch 2015).

Whether or not the AI systems we build in future really are conscious is typically assumed to matter a great deal to how we ought to treat them, given the significance of phenomenal consciousness for moral standing. I argue that this assumption is most likely false: if physicalism is the correct metaphysics of mind – as most philosophers believe (Bourget and Chalmers 2023) – then it does not matter ethically who is in the right in philosophical disagreements about whether consciousness can be realized in conventional computer hardware (compare Lee 2019; Papineau forthcoming).

My argument is as follows. If physicalism is a shared premise among the parties to the disagreement, then their disagreement must be purely semantic, in roughly the following sense: it pertains not to what the world is like, but to what aspects of the world we refer to via our concepts. However, the resolution of a semantic disagreement of this kind does not bear on whether any entity has moral standing.

To see why we should accept the first premise, I ask us to imagine that Alice and Bob are physicalists who confront a computer system, S. S has conventional hardware and exhibits certain high-level and/or coarse-grained computational properties that, according to Alice, suffice for phenomenal consciousness. Bob denies that S exhibits phenomenal consciousness, whether because S is inorganic, because it does not mimic certain low-level and/or fine-grained functional properties of the human central nervous system, or for some other reason.

We may assume that Alice and Bob agree on S’s material composition and functional organization, as well as on the ways in which S’s material composition and functional organization are like and unlike those of conscious neural systems. For any property conceived via a physical-functional concept, P, it seems that Alice and Bob will agree in accepting or rejecting the proposition S EXHIBITS P. Moreover, both agree that S does not exhibit any irreducibly non-physical properties that cannot be referred to via some physical-functional concept. Therefore, it seems, their disagreement must ultimately reduce to the fact that Alice believes that some phenomenal concept co-refers with some physical-functional concept that they both agree picks out a property of S, whereas Bob denies this. Stated otherwise, their disagreement is semantic (compare Levine 2001: 89). We might say that it is not about how things stand with S, but about how our concepts relate to how things stand with S.

I consider a number of alternative candidates for what might be at issue in the disagreement: that it concerns whether S exhibits some metaphysically emergent property, that it concerns the naturalness of the properties shared between S and conscious neural systems, and that it should be understood as a form of irreducibly moral disagreement. I argue that none of these alternatives should convince us to reject my argument.

Although I claim that the disagreement between Alice and Bob is semantic, I do not suggest that there is no fact of the matter (contrast Papineau 2002, forthcoming). For all I have said, the semantic facts may entail that some phenomenal concept does in fact co-refer with some physical-functional concept that Alice and Bob both know to pick out a property of S, or that no phenomenal concept so refers. Thus, I allow for the possibility that Alice is in the right and Bob in the wrong, or vice versa. Nonetheless, I claim that, because the disagreement between Alice and Bob is merely semantic, who is in the right cannot bear on whether S has moral standing.

While I take this premise to be intuitively compelling, and observe that most people find it so, I also suggest that it can be bolstered by appeal to the idea that what it is for an individual to have moral standing is for them to matter morally in their own right and for their own sake (Kamm 2007: 227–30). This, I argue, precludes the possibility that whether an individual has moral standing depends on facts about how our concepts do or do not refer to the properties they exhibit.

In the final part of the paper, I discuss the practical significance of the argument, as well as the extent to which it can be generalized to other disagreements about the distribution of phenomenal consciousness.