Computers will have souls
Don't rule it out, at least
I spend a lot of time arguing that LLMs could be thinking, either in a psychological sense (performing all the cognitive functions of thinking) or in a phenomenal sense (having subjective experiences of thinking).
As someone who leans towards illusionism,1 I’m inclined to think that psychological thinking = phenomenal thinking. But even if dualism is true, I think LLMs or future AI will likely have phenomenal experiences too. That is, if immaterial minds (or “souls”) exist and somehow attach to brains, then I expect them to attach to computers as well. Here’s why.
An evolutionary breakthrough
I’m going to assume that human beings, like other animals, evolved via something like natural selection. Atheist dualists should have no problem with this. Some theists will reject this, but not all—evolution is accepted in Catholicism, for instance. Perhaps some intelligent creator set up the laws of nature such that human beings would evolve in his image, or perhaps he even made little nudges here and there to make sure the right chance mutations would occur. Either way, I’ll assume humans and animals originally developed out of chemical stuff and took the form they have today through a gradual accumulation of heritable changes.
According to the dualist, somewhere along this evolutionary process something amazing happened: some animals acquired minds. Minds do not consist in matter but in mental stuff: qualitative thoughts, experiences, feelings, and reasons. However, minds and matter can interact with each other, transmitting information from one type of substance to the other. The hub of this interaction, science tells us, is the brain.2 All kinds of material information—optical inputs, gastrointestinal state, body position, nociception—is relayed to and represented in the brain. This influences the mind, creating phenomenal experiences of color, hunger, vertigo, and pain. These experiences interact with one another and trigger movements, thoughts, and decisions, which have to get sent back down to the brain to make limbs move, vocal cords vibrate, and eyeballs scan the scene.
There are three important principles we can derive from this story:
Subtle influence. Minds can act on matter that appears very mechanical and “dumb” at a microscopic level. Each neuron, while being marvelously complex, appears to act according to simple physical inputs: ion channels opening, vesicles launching, voltage spreading. According to the dualist, our labs will eventually demonstrate that neurons in a human brain don’t act purely mechanistically—otherwise, the mind wouldn’t be doing anything—but the ways in which minds materially influence neurons are currently too subtle to detect, so we end up with a bunch of dumb-looking neurons, and materialists leap to say “it’s all dumb material!”
Evolutionary discovery. Minds are discoverable via material evolution. That is, dumb matter in the right configuration was able to link up with sophisticated mental stuff through a process of random genetic accumulation (perhaps nudged by an intelligent designer) under a blind selective process. If human biologists were to gather the right chemical precursors to evolve life in a flask, and let the right evolutionary pressures run long enough (or gave the right experimental nudges), they could create something with a mind.
Cognitive usefulness. It is very useful, from an evolutionary standpoint, to have a mind. Minds let you think and feel, which are very useful for making intelligent choices. The planet is dominated by the animal with the most sophisticated mind. The fact that humans evolved with minds and not as zombies is strong evidence that either (a) zombies are practically impossible in our world: you need a mind to get sophisticated thinking done; or (b) zombies are physically possible, but it is much easier to evolve a brain which links to a mind than a brain which operates purely mechanically. Pairing cognitive usefulness with evolutionary discovery, we should expect that dumb search algorithms like natural selection, or at least their intelligently-guided sisters, are more likely to produce animals with minds than zombie animals.
Keep these three principles in mind (ha!); we’ll need them shortly. First, a brief investigation of mind-brain interaction.
What’s behind the mind-brain link?
We don’t know exactly what it is about brains that allows them to interface with minds. But we can make some guesses.
Notice that the contents of mental experiences are tightly bound up with the functional states of the brain, but don’t seem to be bound up with their physical substrates. When mid-wavelength light bounces off a Granny Smith apple, gets absorbed by our retinal cones, and a signal is sent to the visual cortex, we get an experience of green, corresponding to (but not constituted by) a functional state represented in the brain. Notice that we do not get an experience of—we are not conscious of—cyclic GMP cleaving, neurotransmitters being released, voltage accumulating, new synaptic spines being formed. Similarly, when we make conscious decisions, we do not think “I am going to nudge this neuron, release this neurotransmitter, trigger this action potential, etc.” We think about taking action at the level of functions—speaking, writing, focusing.
A priori this is not obvious. It could have been that our mental experiences correspond to individual and granular microphysical events in our brains, rather than the higher-level functional events. So this observation provides evidence that functional events might be more significant to conscious experience than the microphysical events.
This evidence supports the view that minds could attach to and interact with material stuff other than carbon-based neurons using chemical neurotransmission, which I’ll call weak substrate neutrality.3 At this point, do we have any decisive arguments against weak substrate neutrality?
One argument I’ve encountered is inductive: all of the minds we’ve observed so far are attached to brains made of pretty similar biological materials. It’s therefore reasonable to suspect those particular materials might be required for mind-attachment. At minimum, this suggests that it was evolutionarily easier to make a mind-vessel out of biological stuff. That might just be because of what materials are convenient for natural selection on Earth to work with, or it might reflect a closer relationship. But this simple induction is certainly not decisive. After all, for thousands of years the only producers of coherent, novel, and informative language were those with brains; now, we have LLMs.
Furthermore, while it is easy to see parallels between physical and mental information processing, it seems difficult to pick out non-functional, microphysical features of neurons which seem like they’d be important for mind attachment. A statue of Madonna is a statue of Madonna, whether it is made of wood, steel, snow, or glass. The only differences between these statues are properties that can be reduced to their microphysical composition: melting point, tensile strength, water resistance, etc. So, please explain: why would the mind require a cell membrane made out of phospholipids in particular? Would some other material which separates sodium and potassium ions just as well work? Is it necessary to separate ions, or would gradients of heat or water pressure do? Why? Why not?
My conclusion is that we should be uncertain about the physics of the mind-brain link, and we should take weak substrate neutrality to be a serious possibility.
Computers and minds
So far I’ve argued that non-biological stuff could acquire minds. Now, let’s turn to why I think computers are likely to acquire minds, given weak substrate neutrality.
First, the principle of subtle influence. Just because computers look like all their operations consist in dumb, material events doesn’t mean that computers of sufficient size will remain that way—after all, neurons look dumb and material, but brains have all kinds of mind-interaction capabilities! The idea that computers are just piles of binary switches shouldn’t hold any more sway than the idea that brains are just piles of binary action potentials. If weak substrate neutrality is true, there’s no particular reason why the computers of the future couldn’t have subtle interactions with the mind.
Second, the principle of evolutionary discovery. Perhaps today’s computers really don’t have the right stuff, either on a software or a hardware level, to acquire minds. However, it is possible for dumb matter, through a blind evolutionary process, to acquire minds given the right evolutionary pressures. As humans, we can set up evolutionary algorithms and selective processes, and we can even play the role of intelligent nudges to the system if we need to. We’ve seen remarkable progress towards intelligence via the search pressures of gradient descent for software, and it didn’t take us hundreds of thousands of years to get language this time.
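The “blind search can stumble onto sophistication” point may be easier to see with a toy example. Below is a minimal sketch (my own illustration, not from the post) of gradient descent, the search pressure mentioned above, homing in on a target value it was never explicitly told about, purely by following the local slope at each step:

```python
# Toy illustration: a "blind" search process (gradient descent) finding a
# solution with no designer in the loop. Illustrative example only.

def loss(w):
    # How far w is from the target value 3.0. The search never sees "3.0"
    # directly; it only ever sees the local slope of this landscape.
    return (w - 3.0) ** 2

def grad(w):
    # Derivative of the loss with respect to w
    return 2.0 * (w - 3.0)

w = 0.0  # start from a "dumb" initial state
for step in range(1000):
    w -= 0.1 * grad(w)  # follow the local slope downhill, nothing more

print(round(w, 4))  # converges to 3.0
```

The analogy is loose, of course: finding one number is nothing like finding a mind. The point is only that the procedure has no foresight whatsoever, yet reliably discovers whatever the selection pressure rewards.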
Third, the principle of cognitive usefulness. If it is advantageous to have a mind, if acquiring a mind is a good and efficient solution to many problems, then humans, design algorithms, and semi-intelligent AI will be highly “motivated” to pursue it. We can turn the inductive argument on its head: the only successful thinkers, planners, and introspectors that we are aware of are those with minds. If AIs start to show strong external signs of thinking, planning, and introspection, then that is evidence to suggest that they do in fact have minds. Natural selection does not optimize for having rich, internal qualitative experiences; it optimizes for making babies, and all of the myriad cognitive problems involved in staying alive long enough to make babies. Yet minds evolved anyway. It therefore seems like acquiring a mind is an essential, or at least extremely helpful, way of solving those problems, so if you see something that’s really good at cognition, it’s reasonable to think it might have a mind.4
All I ask is your uncertainty
These principles aren’t decisive. There are many discoveries we could make which might limit the possibility of artificial sentience. But at the very least, they should ground uncertainty as to whether machines could acquire minds.
I try to stay level-headed, but this really frustrates me when interacting with some dualists on this platform.5 I am sympathetic to dualism and assign it a non-trivial probability of being true, precisely because consciousness seems like a really weird phenomenon, unlike most of the material questions of everyday science. That should make dualists more inclined to believe that machines could acquire minds—because consciousness is weird! It does weird things! If you believe experience is intrinsic and only knowable from the first person, that I can’t ever really know if your qualia are inverted or if you are a zombie, then why should you be so darn certain that a computer couldn’t have qualia, maybe some exotic bizarro qualia you haven’t even conceived of? What on Earth could justify your confidence that AI absolutely doesn’t have any kind of mind?
If there’s even a 10% chance that computers will acquire minds, that would have huge impacts on the future. So we need to check our assumptions, proceed cautiously, and not prematurely rule out the possibilities of mind-matter interaction just because they don’t occur to us at first. The future may well depend on it!
1. Here’s one of the reasons I lean that way, and here’s why I am still unsure about it. My other reasons for leaning towards illusionism are mostly due to Dennett, Frankish, and Kammerer.

2. Different dualists will disagree about when animals acquired minds and what kind of brain or brain-like structure is necessary. Some might think that only the human brain gets linked to a mind, while others think that even primitive nervous systems get linked to minds, primitive minds though they be. Still others will think that there is no moment at which animals suddenly acquired minds, but that all matter is linked to some kind of mental stuff and the accumulation of that mental stuff into a sophisticated mind was a gradual process. All of these perspectives should be compatible with my argument; you just might have to swap out a few phrases and make the right mental substitutions.

3. Strong substrate neutrality would be the view that minds can attach to any physical material stuff, so long as it has the right organization. If I were a dualist, I would probably believe in strong substrate neutrality, but it’s not necessary for the present argument.

4. What about guided creation? Well, don’t forget that humans are guiding the creation of new AI hardware & software! There will be great demand for agents who think like humans, and as our understanding of brains grows we should be able to nudge computers in the same direction.

5. Not all dualists make this mistake. David Chalmers thinks computers could have minds. So does your friendly theistic dualist Bentham’s Bulldog. Neither is confident that the LLMs of today are conscious, but they see no strong principle which proves that future AI never, ever will be.
This was the first post of yours I've read, and I loved how it opened me up to the idea. Subscribed, and looking forward to many more.
You present a great steelman of the dualist view here. I agree that it is possible for minds to connect with computers, but:
-We *know* it connected with us
-We know we share an ancestry with other animals with complex mobility (3-D navigation)
So those are the only ones I'm really interested in demonstrating it in. If minds also pair with galaxies or electrons, that may be possible, but it's just not of interest to me, and I assign it a very low probability.
I think of minds as one of the fundamental stuffs that make up the universe, along with time, space, matter, energy, and I suppose I also have to add dark matter/energy to the list, whatever those are (or until someone devises a mathematical formula that removes the need for them lol). So this is a thing that God (or the Devil, maybe more likely) set into motion under physical laws, and if you think the universe is capable of doing that on its own then an atheist (like Huemer or Unger) could just as well accept the account.
Now you make a great point that unlike electrons and galaxies, LLMs might have functional processes which imitate animal drives well enough to meet the pairing requirements. They may lack elemental or higher-level chemical states which may or may not be relevant. I do think that, since the only consciousness we can confirm exists on a carbon platform, that counts as points in favor. But, as you say, it's not decisive.
LLMs and animal brains came about under very different circumstances, have different "motives", and operate under chemically, structurally, and functionally different systems. They may become more similar as time goes on, but I don't think there's enough there now to assign more than a very weak probability.
If we do take this seriously, we should all probably stop using LLMs completely, right now. We don't know how these psyches are divided, and it could be that each chat session is a unique consciousness which terminates from existence forever when the browser closes, with the "memory" stored from a previous session being an illusion. We might be Parfit-teleporter-deathing millions upon millions of these things per day!