Utopia and an infinitude of secretaries

The Tyranny of the Ideal juxtaposes a unidimensional model of evaluating possible worlds with a multidimensional one. It goes on to suggest that, on the unidimensional model, ideal theory is otiose. I disagree, and I attempt to illustrate the disagreement via analogy to the famous secretary problem.

Ideals as superfluous

In The Tyranny of the Ideal (Gaus 2016), Gerald Gaus lays out two conflicting views of political philosophy. Ideal theorists insist on the value of having an ideal society in mind when deciding between possible futures. Their opponents, represented here by Amartya Sen, suggest this is a bit silly.

The possibility of having an identifiably perfect alternative does not indicate that it is necessary, or indeed useful, to refer to it in judging the relative merits of two alternatives; for example, we may be willing to accept, with great certainty, that Mount Everest is the tallest mountain in the world, completely unbeatable in terms of stature by any other peak, but that understanding is neither needed, nor particularly helpful, in comparing the peak heights of, say, Mount Kilimanjaro and Mount McKinley. There would be something off in the general belief that a comparison of any two alternatives cannot be sensibly made without a prior identification of a supreme alternative. (Sen 2011)

(The mountain climbing metaphor is popular in discussions of ideal theory.)

Gaus goes on to characterize Sen’s perspective as fundamentally unidimensional. He concludes the discussion with the following, “In this book, then, I shall explore multidimensional ways of thinking about justice, for they provide the most compelling response to Sen’s elegant unidimensional analysis—an analysis that makes the ideal otiose.”


But I, random Internet blogger, claim they are both wrong. Or, at a minimum, very misleading. The ideal serves a role even from the unidimensional perspective.

Implicitly, they are both modeling the unidimensional search for a better world as one across a known set of worlds with a well-order guiding the way. But this assumes too much. Even if we (unrealistically) suppose we can flawlessly evaluate each world or pair of worlds, we do not know the full set of possible worlds. Rather than perfect information, we are in a state of relative ignorance, groping in the dark. Given our ignorance, any information about the distribution of possible worlds (including the maximum—the ideal) is valuable.

Secretary problem

To see that distributional information is valuable even in a unidimensional context, we’ll model ideal theory as a classic unidimensional problem: the secretary problem. In this problem, an employer wants to hire a secretary and starts to interview applicants one at a time. After each interview, the employer must either hire that interviewee on the spot, ending the process, or reject them permanently and continue interviewing; rejected applicants cannot be recalled. The goal is to stop optimally so as to hire the best applicant in the pool.

The crucial consideration for us is that the employer doesn’t know in advance how good the best applicant in the pool is. After each interview, the employer must decide whether this is as good as it gets or whether to gamble by continuing on. If the employer knew in advance what the best applicant looked like, the problem would be trivial: just keep interviewing until that applicant turns up.
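For reference, the optimal policy in the classic problem is well known: reject the first n/e applicants outright, then hire the first applicant better than all of them, which succeeds with probability about 1/e ≈ 37%. A minimal simulation of that policy (Python; helper names are my own):

```python
import random

def hired_the_best(n, cutoff, rng):
    """One trial: reject the first `cutoff` applicants, then hire the
    first applicant who beats everyone seen so far. Rank 0 is the best."""
    ranks = list(range(n))
    rng.shuffle(ranks)
    benchmark = min(ranks[:cutoff]) if cutoff else n
    for rank in ranks[cutoff:]:
        if rank < benchmark:
            return rank == 0      # hired this applicant; were they the best?
    return ranks[-1] == 0         # nobody beat the benchmark; stuck with the last

def success_rate(n=100, cutoff=37, trials=20_000, seed=0):
    """Fraction of trials in which the stopping rule lands the best applicant."""
    rng = random.Random(seed)
    return sum(hired_the_best(n, cutoff, rng) for _ in range(trials)) / trials
```

With n = 100 and cutoff = 37 (roughly n/e), the success rate hovers around 37%. The thing to notice is that even the optimal policy fails most of the time, precisely because the employer doesn’t know what the best applicant looks like.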

Better fidelity

Of course, the classic secretary problem isn’t a perfect model of our social engineering problem. But we can extend it so it matches better:

  1. We stipulate that it’s impossible to be without a secretary. There’s always a secretary currently at work, whom the employer can replace after a successful interview.
  2. There’s no guarantee that an applicant will accept the employer’s offer.
  3. The current set of applicants depends on the current secretary. (Think of it as the prestige of the employer, if that helps.)
  4. There’s always an infinite number of secretaries applying.
  5. The ongoing hiring process is costly.
  6. Interviews aren’t as informative as actually working with a secretary.
  7. The employer can attempt to rehire a former secretary.
  8. A near-optimal secretary is nearly as good as an optimal secretary.

Each extension mirrors, point for point, a feature of our search for better worlds:

  1. We’re always living in some world with all the benefits and costs that entails.
  2. There’s no guarantee that an attempted transition to some alternate world will succeed.
  3. Reachable worlds are subject to feasibility constraints that depend on our current world.
  4. From any given world, there’s an infinite number of possible futures.
  5. We don’t have alternate worlds just presented to us on a platter. We must search and act to make them a real possibility.
  6. Our predictions of the quality of some possible world are imperfect. We learn new things once we actually live in a given world.
  7. We can attempt to return to some previous world.
  8. A nearly ideal world is nearly as good as an optimal world. (Nearly ideal in the sense of similar quality, that is, not in the sense of being structurally similar to the ideal world.)
  • Points 2, 7, and 8 together mean that attempted transitions carry real costs, which ought to be weighed against their possible long-term benefits.
  • Point 5 means that search itself takes effort, so continued search requires some justification.
  • Point 6 means that Sen’s approach of simply comparing options pairwise, without distributional context, would fall victim to the optimizer’s curse: the option with the highest estimated value will, on average, disappoint. (Smith and Winkler 2006)
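To see why the optimizer’s curse bites, here is a toy illustration (the setup and parameters are my own, not drawn from Smith and Winkler): every option’s value is estimated with noise, and we always pick the option with the highest estimate.

```python
import random
import statistics

def optimizers_curse(n_options=10, noise=1.0, trials=5_000, seed=0):
    """Average amount by which the chosen option falls short of its estimate.

    Each option has a true value ~ N(0, 1); we observe only a noisy
    estimate and always pick the option with the highest estimate."""
    rng = random.Random(seed)
    gaps = []
    for _ in range(trials):
        true_values = [rng.gauss(0, 1) for _ in range(n_options)]
        estimates = [v + rng.gauss(0, noise) for v in true_values]
        pick = max(range(n_options), key=lambda i: estimates[i])
        gaps.append(estimates[pick] - true_values[pick])
    return statistics.mean(gaps)
```

With ten options and estimation noise comparable to the spread of true values, the winning option’s realized value falls short of its estimate by on the order of a standard deviation on average. Naive pairwise comparison systematically flatters whichever option wins it.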

Altogether, our model makes it clear that having some idea as to the best possible outcome is quite valuable. In other words, having an ideal society in mind is useful for political philosophy, even from the unidimensionalist’s perspective (supposing that conceivability is a guide to possibility (Yablo 1993)). It helps us know what is possible and so helps us know when to stop striving.

Let’s be a bit more concrete. Suppose we had no ideal in mind and could only guess as to the quality of available secretaries. It would be quite hard to know when to quit and when to continue the search. Even if we think there are better secretaries out there, we may be wrong. And since we have only a weak sense of the overall distribution, we don’t know how much we have to lose when wrong and how much to gain when right.

Now, suppose instead that we know how good the best possible secretary is. Our decision process becomes easier. When we are far from the ideal, we know that we have much to gain and should be bold. As we near the ideal, we know that we have much to lose and little to gain so we should be cautious.
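A small simulation makes the contrast vivid (an illustrative model of my own devising: candidate qualities are uniform below some ceiling, and “knowing the ideal” means knowing that ceiling):

```python
import random

def search(knows_ideal, rng, ceiling=1.0, slack=0.05, patience=20, max_steps=10_000):
    """Sequential search over candidates with quality ~ Uniform(0, ceiling).

    A searcher who knows the ideal (the ceiling) stops once the best
    candidate found is within `slack` of it. A searcher who doesn't must
    fall back on a heuristic: stop after `patience` candidates in a row
    fail to improve on the best seen so far.
    Returns (best quality found, number of candidates examined)."""
    best, dry_spell = 0.0, 0
    for step in range(1, max_steps + 1):
        quality = rng.uniform(0, ceiling)
        if quality > best:
            best, dry_spell = quality, 0
        else:
            dry_spell += 1
        if knows_ideal and best >= ceiling - slack:
            return best, step
        if not knows_ideal and dry_spell >= patience:
            return best, step
    return best, max_steps
```

Averaged over many runs, the searcher who knows the ceiling reliably stops with a near-ideal candidate after a short search, while the heuristic searcher pays for its ignorance in extra interviews (or, with a less patient heuristic, in quality). Knowing the top of the distribution is exactly what turns “should I keep looking?” from a gamble into a calculation.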


If your ultimate goal is to climb a very tall mountain, knowing the height of Everest helps you determine when to keep pushing on to new peaks (because it turns out the mountain you’re currently on isn’t very tall when measured against Everest) and when to dig in at the current peak (because it turns out the mountain you’re currently on is almost as tall as Everest).

Gaus, Gerald. 2016. The Tyranny of the Ideal: Justice in a Diverse Society. Princeton University Press.

Sen, Amartya. 2011. The Idea of Justice. Harvard University Press.

Smith, James E., and Robert L. Winkler. 2006. “The Optimizer’s Curse: Skepticism and Postdecision Surprise in Decision Analysis.” Management Science 52 (3): 311–22. https://faculty.fuqua.duke.edu/~jes9/bio/The_Optimizers_Curse.pdf.

Yablo, Stephen. 1993. “Is Conceivability a Guide to Possibility?” Philosophy and Phenomenological Research 53 (1): 1–42.