
1. Acausal Control
Joe Carlsmith of Open Philanthropy did some great writing on twin prisoners’ dilemmas (twin PDs) where both you and your twin are deterministic AI. One of his main remarks was the eerie feeling of control you can have over your twin, even if they are light-years away:
Because absent some kind of computer malfunction, both of you will make the same choice, as a matter of logical necessity. If you press the defect button, so will he; if you cooperate, so will he. … for all intents and purposes, you control what he does. Imagine, for example, that you want to get something written on his whiteboard: let’s say, the words “I am the egg man; you are the walrus.” What to do? Just write it on your own whiteboard. Go ahead, try it. It will really work. When you two rendezvous after this is all over, his whiteboard will bear the words you chose. In this sense, your whiteboard is a strange kind of portal; a slate via which you can etch your choices into his far-away world; a chance to act, spookily, at a distance.1
Note that this doesn’t violate the impossibility of faster-than-light communication, since everything you will do has already been encoded in both locations in the form of the copy and its input, so you’re not sending anything novel. We might take this as interesting food for thought about what illusions of control you can have in a deterministic world. But Carlsmith takes things further! He is a compatibilist about free will, which is the philosophical stance that free will is compatible with a deterministic world. For instance, a compatibilist might believe that what “free will” really means is being able to consciously deliberate and select your actions, then act without being inhibited by a force outside you. Yes, everything that you do is causally determined by the laws of physics in your brain, but the physical system of your brain and body constitutes you. So as long as it’s the you-system doing the choosing, who cares if the underlying machinery is all physics?
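To make the "nothing novel is sent" point concrete, here is a minimal Python sketch; the `decide` function and its strings are invented stand-ins for the twin's full cognitive algorithm. Two numerically distinct runs of one deterministic procedure, given the same input and with no channel between them, necessarily agree.

```python
def decide(percepts):
    # A deterministic stand-in for the twin's cognitive algorithm:
    # the same input always yields the same output.
    if "rendezvous" in percepts:
        return "write 'I am the egg man; you are the walrus' on the whiteboard"
    return "cooperate"

# Two numerically distinct runs of the same algorithm on the same input,
# with no communication between them.
you = decide("Friday inputs, rendezvous ahead")
twin = decide("Friday inputs, rendezvous ahead")
assert you == twin  # whatever you do, your twin does too
```

Nothing travels between the two calls; the agreement was fixed the moment the copy (and its input) existed in both locations.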
The above is only an example of how a compatibilist might think. Maybe Carlsmith’s compatibilism is more subtle or nuanced than this—he doesn’t go into it in the article—but I don’t think this is that strange of a position, even if false. But whatever his compatibilist flavor, it leads him to a peculiar position for the twin PD: he thinks you are able to meaningfully “control” your twin, without having any causal power over them!
You stand in front of your whiteboard, and it is genuinely up to you what you write, or do. You can write “I am a little lollypop, booka booka boo.” You can draw a demon kitten eating a windmill. You can scream, and dance, and wave your arms around, however you damn well please. Feel the wind on your face, cowboy: this is liberty. And yet, he will do the same. And yet, you two will always move in unison.
We can think of the magic, here, as arising centrally because compatibilism about free will is true. Let’s say you got copied on Monday, and it’s Friday, now – the day both copies will choose. On Monday, there was already an answer as to what button you and your copy will press, given exposure to the Friday inputs. Maybe we haven’t computed the answer yet (or maybe we have); but regardless, it’s fixed: we just need to crunch the numbers, run the deterministic code. From this sort of pre-determination comes a classic argument against free will: if the past and the physical laws (or their computational analogs, e.g. your state on Monday, and the rest of the code that will be run on Friday) are only compatible with your performing one of (a) or (b), then you can’t be free to choose either, because this would imply that you are free to choose the past/or the physical laws, which you can’t. Here, though, we pull a “one person’s reductio is another’s discovery”: because only one of (a) or (b) is compatible with the past/the physical laws, and because you are free to choose (a) or (b), it turns out that in some sense, you’re free to choose the past/the physical laws (or, their computational analogs).2
Wild stuff. But not necessarily wrong. I have more reading to do about compatibilism, but I think if my readers and I believe in (or are sympathetic to) one-boxing in Newcomb’s problem, we shouldn’t leap to accuse Carlsmith of magical thinking. It doesn’t seem like he harbors any delusions about what is physically happening: a deterministic algorithm is being run twice on the same input, with no communication between the two instances. We just might disagree on what we call “control.”
What I’d like to spend a longer time discussing, which Carlsmith only briefly touches on in his essay, is: if it makes sense to say that you are exercising control over your twin, it should make equal sense to say your twin is exercising control over you. After all, the exact same reasoning applies to their position; in fact, their position might be intuitively stronger if they are the one in the past, if you want to hold to the belief that future events can’t control past ones. But then, if someone who is not you is exercising control over you—perfect control—how is your action free? Let’s try to stick to compatibilism and figure out the implications.
2. Identity Argument
Argument: this other person is you. They are stipulated to be an exact clone of you, so it’s not the case that one of you is exerting control over the other; it’s only you choosing.
Response: it may seem that your twin is qualitatively identical to you—possessing all the same properties as you. But they are certainly not numerically identical. If you prick your twin, you do not start to bleed. That statement is true, and it can only be true if you ≠ your twin. So it’s contentious right out of the gate whether they constitute you.
Further, your twin may not be qualitatively identical to you, depending on what you consider the relevant properties to be. For instance, you occupy different positions in space. One of you was created from a cloning machine, while the other was not. Or imagine we randomly put one of you in a room with a blue button and the other in a room with a red button, but both of you were told that the room your twin was in was identical. Then your thought processes would still be nearly identical, but you and your twin would not be having the same sensory experience. So it’s not the case that you would be experiencing the same “stream of consciousness,” if you wanted to go that route.
3. Relativity Argument
Argument: perhaps compatibilist free will is relative. It is true for me that I am controlling my twin, and it is not true for me that they are controlling me. Similarly, it is true for my twin that they are controlling me, and it is not true for my twin that I am controlling them.
Response: it’s difficult to see what would justify this asymmetry, or what it would really mean. Compatibilism is not the position that we don’t really have free will but each of us feels like we do. And agent-relative statements should be expressible from a third-person perspective without becoming contradictory. For instance, it is not contradictory to say that “Winston believes that 2 + 2 = 4 and O’Brien believes 2 + 2 = 5,” even though it would be contradictory to assert “2 + 2 = 4 and 2 + 2 = 5.” However, compatibilist free will requires you to make statements not just about your own state, but also about the state of others. For instance, suppose an evil scientist implanted electrodes in your brain that let her remote-control every action you took, while giving you the powerful sense that you were in control of your own actions, generating a rationalization for each one. Then your actions aren’t free. Now suppose you instead were handed the remote control and controlled yourself through the interface. Now your actions do seem to be free. The only thing that’s changed is whether another person has control over you. So you want the statement “The evil scientist is in control of me” to conflict with “I am in control of myself.” I suppose you could add additional relativist qualifiers, but they feel like an ad hoc solution at best.
4. Partial Control Argument
Argument: free will isn’t black-and-white: if you’re mildly drunk, you have partial but not total control of your actions. You are partially controlling, partially controlled by your twin.
Response: you’re a deterministic agent, going about your day and making choices—“hmm, should I make pasta or falafel for dinner? I think I’ll make falafel.” Two weeks later, a deterministic copy of you is created in the exact same scenario, and makes the exact same choice—“hmm, should I make pasta or falafel for dinner? I think I’ll make falafel.” So now that this copy of you has been created, suddenly, retroactively, the choice you made is no longer fully free. This clone has—partial!—control over the action that you already took. This is quite strange, but maybe not prima facie damning: after all, we’re already accepting the premise of retroactive non-causal control, so maybe this isn’t so weird? The real problem is that this doesn’t square with the compatibilist approach to degrees of free will at all. Under compatibilism, you are free to the degree that your choice is a product of your uninhibited ability to reflect, feel, and act the way you want to act, and this is meant to be compatible with determinism. Creating a perfect simulation of you isn’t supposed to mean you have any less free will. When a clone is created, you have no less capacity to reflect, feel, and act the way you want to. That’s what “control” means to a compatibilist. So I don’t think you lose any control when a twin is created.
5. Shared Will Argument
Argument: you and your clone are not the same person, but you share the same will. This will is totally free (in a compatibilist sense), and it is both your will and your clone’s will. Thus, it is not a matter of one will controlling another.
Response: I like this one.
Instinctively, it has an FDT-like appeal. You and your clone are both executing the same algorithm—turning percepts, feelings, ideas, reasoning into a choice of action. That algorithm, not the particular transistors and/or neurons that are computing it, is what your will is, just as Tolstoy’s War and Peace is a story that is written in many books, even if my paperback and your hardcover are not numerically or qualitatively identical. That algorithm, that will, runs freely in both you and your twin’s mind. Does this mean you are being hijacked, controlled by some abstract other thing that is not yours? No! Because it really is your will.
Imagine a science-fiction hivemind occupying a dozen robots, controlling each of them like we control our fingers. The robots are not independent of each other, but it would be a mistake to say that the hivemind itself does not have free will, even if each individual robot is experiencing different things and occupies a different point in space. Similarly, I am suggesting that your body and your twin’s body both have the same will, and that will is free. So you are free, and your clone is free! Your brain’s capacity to control your hand does not make the movement of your hand unfree. Under compatibilism, that capacity for conscious control is what it means to be free.
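The type/token distinction doing the work here (one will, many instantiations) can be sketched in Python; the names `Body` and `shared_will` are illustrative inventions, not anything from Carlsmith. Two numerically distinct bodies literally share one and the same policy function, just as two physical books can contain one and the same story.

```python
def shared_will(percepts):
    # One abstract algorithm ("the will"), like the text of a story.
    return f"chose falafel after seeing {percepts}"

class Body:
    """A physical instantiation: a distinct object running the same algorithm."""
    def __init__(self, name, policy):
        self.name = name
        self.policy = policy  # both bodies reference the *same* function

    def act(self, percepts):
        return self.policy(percepts)

you = Body("you", shared_will)
twin = Body("twin", shared_will)

assert you is not twin            # numerically distinct bodies
assert you.policy is twin.policy  # one and the same will
assert you.act("the menu") == twin.act("the menu")
```

The point of the sketch is the middle assertion: it is not that two similar wills happen to agree, but that there is only one will, multiply instantiated.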
Well, that’s cool and all. Of course this view has major issues too:
Similarity comes in degrees. Suppose your clone is not exactly like you, and was created by a predictive algorithm that makes a very very very good approximation of what you’ll do. For simple actions, you are still in lockstep, but you’ll decohere after a timescale of a few days. So your will isn’t the same, but aren’t you controlling each other to some degree?
Suppose we return to the evil scientist mind-control example, only in this case, the person with the remote is your clone. Does it make sense to say that your will is free in this scenario, even though you would have controlled your clone to do the exact same thing had it been you who had the remote? What if you are “indexically selfish” (meaning, you don’t care about your clone’s wellbeing, since they aren’t numerically identical to you)? Then your clone is indexically selfish too, and might mentally torture you for profit.
Suppose you have not a twin, but a bizarro doppelgänger, who does the exact opposite of what you do in any given situation. Now it seems you control them, and they control you, but you clearly do not have the same will. You could argue that their will is dependent on yours, but by the same token, they could reason that your will is clearly dependent on theirs, since you are their bizarro doppelgänger as well. So the problem is only pushed up one level of abstraction, not resolved.
Claude asks: if freedom is a property of will and not person, then if your clone commits a crime, are you responsible for it? Definitely not if your clone has changed significantly from you, but what if you really would have done the exact same thing if you were placed in that situation? Would it be just to punish you? What if it was only a simulation of you, and nobody was actually hurt? I’m personally a skeptic about just blame & merit as an intrinsic, rather than instrumental, value, but preserving merit is, I think, one of the motivators for compatibilism in the first place.
These are all pretty serious challenges—I think the doppelgänger scenario is particularly tough. I wouldn’t be surprised if resolving this reduces to a solution to the “subjunctive dependence” problem for FDT (how do you determine what would be the case if a mathematical function had a different output?), since they feel quite similar to me. Any thoughts, dear readers?
Carlsmith, Joe. “Can You Control the Past?” Accessed June 21, 2025. https://joecarlsmith.com/2021/08/27/can-you-control-the-past#ii-writing-on-whiteboards-light-years-away.
Ibid.
Nice post!
On the doppelgänger issue, my intuition is to say that in realistic cases, the doppelgänger’s algorithm would have to be structured as DopplegangerAlgorithm(x) = DoTheOpposite(MyAlgorithm(x)). I.e., we do have a “shared will” in a sense, but the doppelgänger is structured so as to do the opposite of what our shared will chooses.
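The commenter’s proposed structure can be written out as a small Python sketch (the function names and return strings are hypothetical, chosen to mirror the equation above): the doppelgänger’s algorithm wraps yours and inverts its output only at the last step, so your algorithm is still the shared core.

```python
def my_algorithm(situation):
    # Your (deterministic) will: a stand-in decision procedure.
    return "cooperate" if situation == "twin PD" else "write on whiteboard"

def do_the_opposite(action):
    # Invert the chosen action; fall back to refraining if no clear opposite.
    opposites = {"cooperate": "defect", "defect": "cooperate"}
    return opposites.get(action, "refrain")

def doppelganger_algorithm(situation):
    # The commenter's proposal: the doppelgänger contains your algorithm
    # and negates its output, so the "shared will" is yours, inverted last.
    return do_the_opposite(my_algorithm(situation))

assert my_algorithm("twin PD") == "cooperate"
assert doppelganger_algorithm("twin PD") == "defect"
```

On this picture the asymmetry is structural: the doppelgänger’s code calls yours, not the other way around, which is one way to resist the “your will depends on theirs” counter-move in the bizarro argument above.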