Discussion about this post

Tyler Seacrest

Love this post. Here's a musing for you: When people think about determinism, they think about (let's say) my decision to have eggs for breakfast at 8:03am this morning. They might say "Tyler thought he was making the decision at 8:03am, but actually the decision was pre-determined due to mathematics and the trajectories of atoms and things." So when was the decision actually made? "Well, outside of time," they might say. But why is 8:03am *after* the decision was made, if the decision was made completely outside of time? Indeed, maybe I am the mathematics, and in some sense every moment of my life exists outside of time; indeed, even 8:03am is outside of time in a broader sense. Thus the proposition that the decision was made by me at 8:03am doesn't contradict what determinists say.

Ali Afroz

I think it doesn’t make sense to talk about whether something is you as if it were a metaphysical question, because at the end of the day it’s a normative question about how to treat somebody. After all, you could insist that you five seconds from now are not the same person, and there is nothing incorrect about that, even if it’s very stupid according to normal human priorities. Even if your clone is exactly identical to you, to the point of having exactly the same experiences, such that you can’t subjectively distinguish each other, it still makes sense for you to coherently care about your own well-being but not about the well-being of your clone. Although in that case you know that if you decide to behave this way, so will the clone, in which case this is self-defeating, because it would be better for both of you if you agreed to care about each other. To be clear, this is a counterfactual claim about what would happen if you did something. There is no need to talk about controversial things like personal identity, or whether you can cause your clone to behave in a certain way, if that talk is more confusing than simply pointing out a fact about a counterfactual that might actually take place, as opposed to one that exists merely in somebody’s head. You know perfectly well that if you behave a certain way, so will the clone, regardless of whether you’re causing it or not according to your definition of a cause.

Honestly, I think this is part of why it’s so adaptive for people to care about their future self: if they did not, neither would their past self have cared for their future self, and while people can’t do functional decision theory, it’s simply the case that if you behave in this stupid way, reality will hit you hard and you’ll go extinct real fast, if you ever happen to actually come into existence at all. Of course, this logic becomes less applicable the more you differ from your past self. It’s a bit similar to how, in group selection between genes, there is an advantage to a gene in helping its copies spread, but not if the copies are sufficiently different. Yes, I am aware that an agent trying to maximise its utility is not necessarily similar to a gene trying to reproduce, but I think the gene in question is clearly a crude example of such an agent and a good example of how such agents behave under such pressures, even if there are complicating factors that are unrelated to the functional decision theory reasons.

