Preference utilitarianism—psychological or metaphysical? II

Preliminary implications

There are at least two possible interpretations of preference utilitarianism: psychological preference utilitarianism (the morally important part of preference satisfaction is when the preferrer believes it to be satisfied) and metaphysical preference utilitarianism (the morally important part of preference satisfaction is when the world comes into accord with the preference). Each has strange implications. PPU favors deception and gives up intuitive supervenience. MPU requires us to pick some demarcation criterion; the broadest possible demarcation criterion is a bad candidate.

Intro

Last time we highlighted that there are actually several possible interpretations of preference utilitarianism. These depend on when you get to recognize a preference as having been satisfied in your moral accounting: When the world changes to accord with your preference? When you believe the world to have changed? When you have a justified true belief that the world has changed?

In this post, we’ll draw out some implications of these views which will also serve as preliminary arguments for and against. For simplicity’s sake, we’ll focus on the two poles: preferences are satisfied when the preferrer believes the world to have changed (purely psychological), and preferences are satisfied when the world has changed (purely metaphysical).

(I’ll also note that I assume advocates of preference utilitarianism implicitly (or perhaps explicitly but silently?) believe the psychological variant, and my intuition favors it. If I argue against that position more vehemently, it’s only a sign of esteem.)

Psychological

Deception

One obvious but unfortunate implication of the psychological point of view is that self-deception is A-OK. Obligatory even. If I’d like to create a grand unified theory of physics, I may find it easier to set myself a bastardized version of the problem than to solve the real thing. And if I can deceive myself in this way, the psychological theory of preference utilitarianism (henceforth PPU) makes a prima facie case that I am morally obliged to. (Because this deceptive approach is easier, it means I can satisfy this preference quickly and move on to the satisfaction of other preferences rather than sinking vast gobs of time into the ‘authentic’ method.)

But it’s not just self-deception. It’s any and all deception. Suppose I want to be loved by my family. If I’m a bit of a git, my family might judge that just gaslighting me into believing I already am is easier and more likely to succeed than going through the hard personal and interpersonal work of improvement. Again, they may even be morally obliged to gaslight me. Similarly, depending on circumstances, a polity might be morally obliged to wage an effective propaganda campaign denying the presence of poverty rather than actually solving it. We’ve reached peak Orwell already.

One reasonable response is that these deceptions are short-sighted. Satisfying these ‘gateway’ preferences might unlock new and better possibilities that are unavailable to the deceived. An actual grand unified theory of physics might allow things heretofore considered impossible.

But not all preferences are like this. Some are truly terminal. No further beliefs or actions are contingent upon their fulfillment in the world and so there is nothing that pragmatically militates for their accuracy. Being loved might be like this. I struggle to think of any further beliefs or actions that are only available to the truly loved and unavailable to those who are surrounded by committed gaslighters.

Finally, I’d rest uneasy if my only defense against being morally obliged to deceive was that it was sometimes pragmatically unwise. If a thing seems bad, it’s nice to have a principled way of avoiding it rather than hoping that contingent fact works in our favor.

Supervenience lost

At the end of the hazing, Broseph’s frat brother came back into the room and gave the CCTV a good ol’ Fonzie thump. The colors shifted back and reflected the fact of the matter. “Bro, we got you good! The TV had distorted colors. The M&Ms actually started out mixed and he was sorting them into separate piles for each color. Psych!”

With Broseph’s new beliefs about what happened, his evaluation of the morality of the original M&M shuffling has flipped. Furthermore, if you’d asked Broseph about returning the M&Ms to their original position before the revelation, he’d have correctly said it was good (he believed it satisfied his preferences). Just after the revelation, he would also be correct in claiming that it was bad (he believed it frustrated his preferences). So under PPU, a single action can correctly be said to be both right and wrong. That’s worrying because supervenience isn’t something to give up lightly.

The only way to retain supervenience is to contextualize each action in light of observers’ beliefs. For example, “Returning the M&Ms to their original position is good when Broseph believes the CCTV and thus believes all the M&Ms are the same color.” and “Returning the M&Ms to their original position is good when Broseph doubts the CCTV and thus believes all the resulting piles are different colors.”

This works, but it’s contrary to the plain language used when people make moral evaluations. If we want PPU and supervenience, we must interpret statements like “Action A is right.” as having a silent contextualizer: “I believe action A in the context of belief B is right.” This is a bit weird.

Anthropic effects

PPU also seems to imply that surprise, secret murders aren’t wrong. Even though we imagine most people have a preference for life, if they’re dead, there is no preferrer that’s frustrated. But this issue applies to any embodied ethic, including, for example, hedonic utilitarianism, so we won’t belabor it here.

Metaphysical

Scope

One benefit of PPU is that it ‘screens out’ the vast majority of preferrers. The preferences an actor must account for are precisely those of agents that will come to know about the action. On the other hand, metaphysical preference utilitarianism (henceforth MPU) commits us to caring about all preferences.

Suppose the frat brothers had cut the CCTV cable so Broseph was just sitting in an empty room staring at a blank TV. MPU implies the masked brother is still morally obliged to sort the M&Ms—it brings the world into accord with Broseph’s preferences, which is good even if Broseph doesn’t know about it.

On the other hand, there are potentially many other humans on Earth with preferences about M&M sorting. The masked brother ought to try to satisfy these as well. Satisfying these preferences with almost no information is quixotic, but I’ve typically found a “Just do your best!” reply to concerns about tractability issues in utilitarianism pretty satisfying.

Even stranger is the possibility that the masked brother might be obliged to account for preferences outside of his light cone. If an alien that cares about M&M sorting lives on the edge of the universe such that the alien is outside of our light cone (i.e. we can never have any causal influence on it), the most obvious versions of MPU suggest that the masked brother should still account for these preferences.

And once you’ve opened the door to honoring preferrers who are causally isolated from the actor, it seems that you ought to honor the preferences of both past (i.e. the deceased) and future (i.e. the unborn) preferrers. This is both weird and a bit convenient because it provides a way to talk about intergenerational justice.

Path dependence

If we accept the implication that past preferences have moral import for current actions, we’ve introduced path dependence. The same action can be correctly subject to different moral evaluations depending on past preferences. For example, if we’re at time 2 and indifferent between actions A and B, but someone at time 1 preferred action A, we ought to do A. On the other hand, if the person at time 1 preferred action B, we ought to do that. So our moral obligations at time 2 differ even if the preferrer from time 1 is now deceased and these two counterfactual worlds are identical at time 2. We’ve lost supervenience again.
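To make the path dependence concrete, here’s a minimal sketch of this kind of accounting. Everything here is a hypothetical model for illustration (the function name, the equal weighting of past and present preferrers, and the use of simple weight sums are all my assumptions, not anything the view itself specifies):

```python
# Hypothetical sketch: MPU with past preferences counted.
# Each preferrer is a dict mapping actions to preference weights.

def mpu_best_action(past_preferrers, current_preferrers):
    """Pick the action with the highest total preference weight,
    summing past (possibly deceased) and current preferrers alike."""
    totals = {}
    for prefs in past_preferrers + current_preferrers:
        for action, weight in prefs.items():
            totals[action] = totals.get(action, 0) + weight
    return max(totals, key=totals.get)

# Two counterfactual worlds that are identical at time 2:
now = [{"A": 1, "B": 1}]        # current preferrers are indifferent
world_1_past = [{"A": 1}]       # a time-1 preferrer (now deceased) favored A...
world_2_past = [{"B": 1}]       # ...or favored B

print(mpu_best_action(world_1_past, now))  # → A
print(mpu_best_action(world_2_past, now))  # → B
```

Same time-2 state, different obligations: the evaluation no longer supervenes on the present facts alone.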

Mistaken preferences

Another bizarre consequence of honoring past preferences is that we might be obliged to honor our own past preferences that we now regard as mistaken. If our preferences are time-weighted (i.e. the longer a preference is held, the more weight it’s granted in our moral calculus), naive MPU suggests I’m morally obliged to honor an old, long-standing preference contrary to my current preferences until I’ve held the current preference long enough. That is, if I preferred action A to B for times 1-10, and start preferring action B at time 11, it is only morally permissible to undertake B from time 21 on.
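The arithmetic above can be sketched out explicitly. This is a toy model under stated assumptions: the weight function (weight equals duration held) and the function name are mine, chosen only to reproduce the naive time-weighting described in the paragraph:

```python
# Hypothetical sketch of naive time-weighted MPU: a new preference may
# only be acted on once it has been held at least as long as the old one.

def permissible_at(t, old_start, switch_time):
    """True when acting on the new preference is permissible at time t."""
    old_weight = switch_time - old_start  # duration the old preference was held
    new_weight = t - switch_time          # duration the new preference held by t
    return new_weight >= old_weight

# Old preference held for times 1-10; new preference adopted at time 11.
# The earliest time the new preference may be acted on:
print(min(t for t in range(11, 40) if permissible_at(t, 1, 11)))  # → 21
```

This matches the numbers in the example: ten time units of the old preference must be balanced by ten of the new, so action B only becomes permissible at time 21.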

Subconclusion

These concerns all suggest that we must be more selective in which preferences we regard as morally important—all preferences everywhere and everywhen quickly becomes baffling. It’s not enough to say that a preference is satisfied when the world comes into accord with it. There must be some further relationship between the preference and the world. But what is this demarcation criterion that doesn’t rely on psychology? Light cones?

Conclusion

We’ve examined two possible interpretations of preference utilitarianism: psychological preference utilitarianism (the morally important part of preference satisfaction is when the preferrer believes it to be satisfied) and metaphysical preference utilitarianism (the morally important part of preference satisfaction is when the world comes into accord with the preference). Each has strange implications. PPU favors deception and gives up intuitive supervenience. MPU requires us to pick some alternative, non-obvious demarcation criterion; the broadest possible demarcation criterion is a bad candidate.

Maybe the best solution is knowledge preference utilitarianism? Rather than the preferrer’s belief being the crucial ingredient, it’s knowledge—justified true belief.