UPDATE Aug 2nd: changed title to include the term “non-rigid”, since I now believe the proper interpretation of preferences does protect us from wireheading. See:
1. What’s Worth Caring About?
Silas Abrahamsen, known primarily for his magnificent mustache and secondarily for Wonder and Aporia, thinks it shouldn’t matter if your life is objectively bad:
There are several ways of my life being bad that we should be careful to distinguish:
I am deceived about how bad my life is by my own preferences. For example, I might at every moment have a slightly negative life (by my own preferences), but whenever I think about the past, I remember it as good. This would mean that my life is actually bad by my own lights, but I just fail to see this.
My life is good by my own preferences, but it is objectively bad. That is, I know what my life is like, and I prefer it being that way. But by some objective criteria it’s bad.
(1) is clearly something I should care about! Or perhaps a better way of putting it: I actually do care about whether my life is bad in the sense of (1)—by definition! … However I don’t think (2) is something I should care about.1
Silas distinguishes the claim that he doesn’t care about #2, which is just stipulated, from the claim that he shouldn’t care about #2, which is an additional normative claim. He also keeps his claim scoped to caring about the goodness of your own life, not about goodness in general:
Imagine someone who, as a matter of fact, doesn’t care about innocent grandmothers being tortured. Clearly they should care, no? Well, I tend to agree on that point, but that’s also broader than what we’re thinking about here. Remember, we’re not talking about what’s good simpliciter, but whether my life is good. I might have vicious or wrong preferences regarding others, but when we’re considering how good my life is, we should consider whether it’s good for me—who else? … What I claim is simply that insofar as a life is good in itself, it is so in virtue of fitting what the person having the life wants. (emphasis mine)2
Yet he also concedes:3
At some point this is just gonna bottom out in some brute appeal to intuition, though.4
So today I’d like to introduce some thought experiments that might change your intuitions. If you have no reason to care about whether your life is bad in ways other than your current preferences, you have no reason to think that the hijacking and reprogramming of your preferences should be undone after the fact. That seems pretty bad.
2. “Objective” Value
One thing I want to clear out of the way: when I’m talking about an objective value, all I mean is a measure of how good Silas’ life is at a point in time that is independent of Silas’ preferences at that point in time. I’m also talking about a measure of how good Silas’ life is in and of itself, not how it serves as a means to an end. If Silas’ evil doppelgänger feels elated and satisfied by throwing shrimp into a meat grinder, then the doppelgänger’s life is bad in the sense that it does harm to other living things, but it is not bad in and of itself—if shrimp didn’t suffer, there’d be nothing wrong with it.
That is all that is required for an “objective” value for Silas’ framework: we just have to show some value by which someone’s life is bad independent of their own preferences and independent of some unjust violation of someone else’s preferences.5 I think this is doable even if you are skeptical about stronger versions of “objective” value, like saying that some things are inherently bad independent of their consequences, or that values are given by a divine being.
3. Getting Wireheaded
“Wireheading” refers to rewiring your brain to give you stupendous amounts of pleasure for just sitting around and doing nothing. It is what a very poorly designed AI might do if its goal were a naïve version of “make humans happy”: it’s really hard to supply humans with love, friendship, truth, knowledge, dumb jokes, wonder, aporia, etc., but it’s comparatively easy to stick an electrode in their nucleus accumbens and flood them with dopamine all the time. It’s also what people worry about as the natural endpoint of (some versions of) hedonistic utilitarianism.
Now many of us would rather not be wireheaded, including Silas:
For example, I desire having friends—and I would prefer a life where I had friends to one where I didn’t. Hence I think it’s bad for me if all the people I think are my friends are really engaged in an elaborate prank and secretly hate me—even if I never found out.6
And it would presumably also be bad for Silas if he had no friends but was so high on oxytocin that he got all the warm fuzzies of having them while remaining totally alone.
But there are different ways you could be wireheaded. One is flooding the brain with pleasure directly, but another is changing a person’s preferences such that they get immense pleasure from doing very simple, easily satisfiable things. This is how our poorly designed AI might try to work around a badly coded constraint that says it should work to satisfy human preferences: just give them different preferences!
Suppose I get hit by a laser in a freak lab accident that scrambles my brain and preferences. Let’s refer to me before the laser as pre-Jack and me after the laser as post-Jack; I’m not weighing in as to whether these are ontologically distinct persons yet.
Pre-Jack has many preferences, like having close friends, writing good philosophy, and telling dumb jokes. Post-Jack only cares about staring blankly at walls, privately writing cruel things about his former loved ones, and watching Despicable Me, all things that pre-Jack hates. Post-Jack is therefore easily satisfied and loves the life he’s living.
Now suppose you have the opportunity to reverse the brain-scrambling and restore me to my former self. Should you do it?
I think you absolutely should! My post-scrambling life sounds horrifying. I do not want myself to lead that life even though I would prefer it after the scrambling happened.
However, according to Silas, there is no intrinsic reason why you should unscramble me. Nothing makes my life intrinsically good or bad other than my preferences. Maybe pre-Jack didn’t prefer this life, but if pre-Jack is a different person, that person no longer exists, and if he’s the same person, he’s just changed his mind. Indeed, if you ask post-Jack about whether he wants to revert the transformation, he would be vehemently opposed. It might even be wrong to unscramble me, because it runs counter to post-Jack’s preferences. According to Silas, there is no measure by which I am now leading an intrinsically worse life than I used to. Maybe pre- or post-Jack did more good in the world, or was more honest, or whatever, but Silas must deny that I am now living a worse life than I used to.
I ask you now to seriously think about being modified into having a set of preferences that are totally antithetical to your sense of self. Maybe you will now hate the art you love and embrace the art you hate. Maybe you will give up on pursuing all your dreams. Maybe you will never raise a child or have a spouse. Maybe you will fantasize—privately!—about torturing your loved ones. An appropriate amount of money will be donated to the right charities to exactly compensate for the good you would have done to other people—but no more good will be done than you would have done anyway. Do you really think that life is no worse than the life you lead now?
Now, I will be the first to admit that there is an easy way to get philosophically confused. I, right now, have a strong preference against wireheading, and that preference would hold even if there were no objective reason why wireheading would be bad for me. Perhaps when I introspect and find myself repulsed by the wireheading scenario, all I am locating is that preference, rather than any additional reason.
If that is the case, I leave you with this question: Silas’ article was not about whether there are objective ways in which a life is intrinsically good or bad, but whether or not we should care about such things. It sounds like I have a powerful reason why I should care about authenticity to past versions of myself, even in the event that my preferences change: doing so would protect me from wireheading scenarios. Doesn’t that give me reason enough?
Abrahamsen, Silas. “Should I Care Whether My Life Is Objectively Bad?” Substack newsletter. Wonder and Aporia, July 24, 2025. https://wonderandaporia.substack.com/p/should-i-care-whether-my-life-is.
Ibid.
“Concedes” might sound a little more accusatory than I mean—there’s really nothing wrong with appealing to intuitions as long as you don’t have more powerful defeaters for them.
Ibid.
I say unjust here because there are many preferences people could have about Silas that are very dumb and that don’t make his life any worse. For instance, if I hate art and beauty I could prefer that Silas didn’t have a mustache, but that wouldn’t be a reason why his mustache is bad.
Ibid.
Finally got around to this. I really enjoyed this, and always like getting this kind of engagement!
I suppose my overall response would be that, yes, it would probably be good for you to get wire-headed.
First a small clarification: You say that I would have no measure to say that post-jack is worse off than pre-jack. I wouldn't exactly say that. Rather, the measure is total preferences satisfied/thwarted, weighed by their intensity. So if pre-jack has 10 medium-strength preferences satisfied, and post-jack has 5 satisfied, I would be better off as pre-jack--so we can measure across sets of preferences (assuming that there is a meaningful way of comparing strengths of preferences across pre- and post-jack).
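The measure described here can be sketched as simple arithmetic: sum each preference’s intensity, counting satisfied preferences positively and thwarted ones negatively. The numbers below are purely hypothetical, and the sketch assumes (as noted above) that intensities are comparable across pre- and post-Jack’s preference sets:

```python
# Sketch of an intensity-weighted preference-satisfaction measure.
# Each preference is a pair (intensity, satisfied).

def welfare(preferences):
    """Sum satisfied preferences positively and thwarted ones negatively,
    each weighted by its intensity."""
    return sum(i if satisfied else -i for i, satisfied in preferences)

# Pre-Jack: 10 medium-strength (intensity 5) preferences, all satisfied.
pre_jack = [(5, True)] * 10
# Post-Jack: 5 preferences of the same strength, all satisfied.
post_jack = [(5, True)] * 5

welfare(pre_jack)   # 50
welfare(post_jack)  # 25 -- on this measure, pre-Jack is better off
```

Thwarted preferences drag the total down, so a post-Jack with many frustrated desires could score below zero even if a few of his preferences are satisfied.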
I don't know if you meant what I read you as saying, but I just wanted to make sure we didn't misunderstand each other :)
Anyways, as I said, if you really get more preferences satisfied as post-jack, then I do think it's better for you to get wire-headed than not. Let me give a case that might make this less counterintuitive:
Suppose I hate mushrooms and love liquorice. Liquorice is hard to get ahold of, but mushrooms are easy. That means that I would have more preferences satisfied if I liked mushrooms.
One day I hit my head, making me suddenly have the reverse preferences. Was this good for me? It seems very much to me like it is! I can now satisfy my preferences much more easily than previously. Prior to hitting my head, I might have some revulsion to the thought of hitting my head like this, as liquorice is so tasty and mushrooms suck--but this is me failing to realize that my preferences would be better satisfied by hitting my head.
Now, we can gradually increase the scope of which preferences are changed until we get to wire-heading. It seems to me that the above should generalize to this.
I think you are correct to locate the instinctive negative reaction to wire-heading at my current preferences. I simply implicitly consider post-jack through the eyes of pre-jack, and from that perspective the life of post-jack looks very bad. However, if I were behind a kind of veil of ignorance about my preferences, it seems like I should and would hope to be born as post-jack over pre-jack.
On top of that, some of the examples you give of what post-jack likes are ethically questionable. You stipulate that they don't result in anything bad, but still I think it affects how we look at the case. However, how we account for "evil preferences" will plausibly be a question for ethical theory, not for theory of welfare--it really is good for the pedophile to watch child porn (all else being equal), even if we should not want our ethical theory to count this as a good thing. This further defeats the intuition in this case, I think.
Now, maybe I'm sort of stretching the idea of me preferring one life over another: It's not necessarily that if I'm given the choice between being wire-headed or not, I'd necessarily choose to be if it'd be better for me. Rather, the idea is that it would better align with my preferences to be wire-headed than not (where "my preferences" doesn't rigidly designate my current preferences, but non-rigidly designates whatever preferences I'd have in each scenario).
This is similar to how I account for things like drug addictions. It's not that crackhead-silas would choose rehab over a dose of heroin if given the choice. Rather, I'd prefer rehab in the sense that it'd better satisfy my overall preferences--perhaps after my preferences being remoulded through rehab. And for this reason it's better for me. I should thus prefer it, only because I in some sense do prefer it.
This is in contrast to something like objective list theory, where what's best for me might really fit worse with my preferences whatever they turn out to be.
Again, thank you for the thoughtful response!
(P.s. I'm not sure I quite understand the point you make in the very last paragraph).
I feel that if your preferences are altered so drastically, the person that results can’t be considered the same individual. Even if you disagree, I don’t think their life is actually going badly in any sense; it’s just that your preferences being altered was very bad from the point of view of your past self, and it creates bad incentives to let people modify your motivational system and then use it as an excuse not to change you back. This doesn’t require that you regard the altered version as a different person, only that you agree that your past and future selves can have conflicting preferences and that both of these preferences carry some moral weight. Certainly, if somebody used a machine to create an exact copy of what you would be like if your motivational system were so altered, then this created individual would not have a life that was objectively going badly in any sense. The only way your example is different is that your past self is altered, so it’s not just the creation of someone with strange preferences but also the deletion of a lot of your beloved preferences.