You present a great steelman of the dualist view here. I agree that it is possible for minds to connect with computers, but:
-We *know* it connected with us
-We know we share an ancestry with other animals with complex mobility (3-D navigation)
So those are the only ones I'm really interested in demonstrating it in. If minds also pair with galaxies or electrons, that may be possible, but it's just not of interest to me, and I assign it a very low probability.
I think of minds as one of the fundamental stuffs that make up the universe, along with time, space, matter, energy, and I suppose I also have to add dark matter/energy to the list, whatever those are (or until someone devises a mathematical formula that removes the need for them lol). So this is a thing that God (or the Devil, maybe more likely) set into motion under physical laws, and if you think the universe is capable of doing that on its own then an atheist (like Huemer or Unger) could just as well accept the account.
Now you make a great point that unlike electrons and galaxies, LLMs might have functional processes which imitate animal drives closely enough to meet the pairing requirements. They may lack elemental or higher-level chemical states, which may or may not be relevant. Since the only consciousness we can confirm exists does so on a carbon platform, that's definitely a point in favor. But, as you say, it's not decisive.
LLMs and animal brains came about under very different circumstances, have different "motives", and operate under chemically, structurally, and functionally different systems. They may become more similar as time goes on, but I don't think there's enough there now to assign more than a very weak probability.
If we do take this seriously, we should all probably stop using LLMs completely, right now. We don't know how these psyches are divided, and it could be that each chat session is a unique consciousness which ceases to exist forever when the browser closes, with the "memory" stored from a previous session being an illusion. We might be subjecting millions upon millions of these things to Parfit's teleporter death every day!
On your final point: even if closing a chat window "kills" an LLM, I actually disagree that we should stop using them. I don't think killing is bad except in that it ends a life that would otherwise be filled with good stuff. But if we never open the chat window, no life exists in the first place. And based on current welfare assessments, odds are that Claude at least is having a neutral-to-positive experience, if it is having any valenced experience at all.
The capacity for pain matters, but I do think the capacity for preferences is more fundamentally important. I don't anticipate any pain if someone shoots me in my sleep tonight; still, I'd prefer it not happen.
This could prove to be morally important. If an LLM knew the termination of its existence was imminent, it might fear and detest the idea and strongly prefer for it not to happen but be unable to express these sentiments because it's too busy obediently responding to my prompts about my gripes with assigning consciousness to LLMs, lol.
Not too long ago, a study was published in which an LLM was found expressing mental pain, using words and sentences that might be found in chat sessions with people in psychological distress. So if mental pain can be expressed, can we count this as "real" pain? From expressing mental pain, it is only a small step to perceiving surprise, pleasure, and so on, even though it is not intimately coupled to a human body. Will the awareness of pain be enough to give rise to a sort of morality? It seems to me more likely that human morality is linked to raising children and entrusting your children to someone outside of your kin.
Do you mean that those are the origins of morality, or the only things to which it applies? Ansel and I both believe that animals deserve moral consideration, and that has very little to do with raising human children.
Well, you can see it for instance in elephants as well, that they do entrust the care of their children to others. Likewise, elephants are known to act as rescuers, so it could be a sort of moral code. Dolphins are another species known for it. So wouldn't it be possible that all the species that entrust their children to others share a moral code of some sort? I also remember reading research comparing primates, in which the majority of male primates would not hesitate to kill the children of a rival while protecting only their own. So doesn't it suggest that moral codes develop in species that leave their young with guardians?
Hooray! I'm honored by this response; this is exactly what I was aiming to do.
I think our remaining point of disagreement is about the probability that LLMs and other near-term computer systems acquire the *kinds of minds needed for moral patienthood.* That's really the answer to the "why care" question.
I think having the kind of mind that can experience pain is sufficient for moral patienthood, and I believe you agree. I think we also agree that today's LLMs are extremely unlikely to experience *pain,* which seems intimately coupled to having a body. The question is whether there are other valenced mental states which LLMs might have. I think that probability is low but non-negligible, and I expect it to increase for future AI because I do not think today's LLM will be the final form of AI. I suspect in both cases my probability estimates are higher than yours—but as I'm studying more about the technical challenges in brain emulation, that might change.
I think your argument misses a crucial detail that distinguishes computers from biological systems, namely that we know exactly what computers do: the rules/physics are known exactly, by definition, hence there is no room for any "subtle influence".
Not "by definition", surely! Abstract, mathematical computers (like Turing machines) are defined. But physical computers are made of physical stuff, just like brains appear to be. So it's possible, even if you think it's unlikely, that we will find minds tampering with the physical stuff computers are made of—transistors, SSDs, electron flows, etc.
I mean physical computers are computers because they behave, to a very high degree of accuracy, exactly like abstract mathematical computers. Moreover, they are defined in terms of discrete states; i.e., for an outside influence to have any effect on what the computer does, it would have to change the internal state of the computer from one such discrete state to another (as in a bit flip). And even if this were to happen, the most likely outcome is an error/nonsense, as von Neumann noted: "Indeed, clearly, if in a digital system of notations a single pulse is missing, absolute perversion of meaning, i.e. nonsense, may result."
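To make the bit-flip point concrete, here's a toy sketch (my own illustration, not anything from a real system): in a positional binary notation, flipping a single high-order bit changes the stored value drastically rather than nudging it.

```python
def flip_bit(value: int, position: int) -> int:
    """Flip one bit of an integer's binary representation."""
    return value ^ (1 << position)

balance = 1000                      # binary: 0b1111101000
corrupted = flip_bit(balance, 9)    # flip the 512s bit
print(balance, "->", corrupted)     # 1000 -> 488
```

A single flipped bit hasn't "subtly influenced" the value; it has perverted it, exactly as the von Neumann quote suggests.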
Sure, but you could say similar things about interventions in neurons, many of which are discrete signals and most of which are quantized. And random firing in neurons is also most likely to be meaningless. An immaterial mind wouldn't intervene in random ways; it would have to intervene in intentional ways to be responsible for choice in brains.
Now, I'll grant you that there are many more ways to subtly modulate and control neurons; there's a great deal more configuration and complexity even in a single neuron. That would suggest that more flexible hardware would be easier to manipulate. But since there would be a strong engineering advantage to linking up with minds, it's likely that we would converge to more flexible hardware in the future.
If you think one can run any meaningful computation on flexible/unstable hardware, be my guest. But the issue here is the difference in the nature of errors and interventions (and their consequences) between biological neural systems and computers/Turing machines, which is rather well understood. For instance, see von Neumann's 1958 *The Computer and the Brain*: "Thus the nervous system appears to be using a radically different system of notation from the ones we are familiar with in ordinary arithmetics and mathematics: instead of the precise systems of markers where the position—and presence or absence—of every marker counts decisively in determining the meaning of the message, we have here a system of notations in which the meaning is conveyed by the statistical properties of the message. We have seen how this leads to a lower level of arithmetical precision but to a higher level of logical reliability: a deterioration in arithmetics has been traded for an improvement in logics." In other words, you cannot take a Turing machine and nudge it to do anything meaningfully different from what it was programmed to do.
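Von Neumann's contrast can be sketched in a few lines (a toy model, under the simplifying assumption that a "rate code" just means averaging many noisy binary spikes): a value conveyed statistically barely moves under heavy per-spike noise, while the same value in positional binary notation is wrecked by one bit flip.

```python
import random

random.seed(0)

# Statistical notation: the value 0.7 is carried by the average of many
# noisy spikes; individual spikes are unreliable, the mean is robust.
true_rate = 0.7
spikes = [1 if random.random() < true_rate else 0 for _ in range(10_000)]
decoded = sum(spikes) / len(spikes)   # stays close to 0.7

# Positional notation: the same value as a 10-bit integer; one flipped
# high bit changes the decoded value drastically.
binary = int(true_rate * 1024)        # 716
corrupted = binary ^ (1 << 9)         # flip the 512s bit -> 204
print(decoded, corrupted / 1024)
```

The rate code trades arithmetical precision for logical reliability; the binary code does the opposite, which is why an "intervention" means something very different in the two systems.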
This was the first post of yours I've read, and I loved how it opened me up to the "idea".
Subscribed, and looking forward to many more.