Ideal theory in the shadow realm

It’s opposite day! Instead of talking about the ideal, we’re going to talk about the anti-ideal—the worst of all possible worlds. I contend that, if ideal theory is useful, anti-ideal theory is also useful.

Last time, we covered two roles for ideal theory—ideal as destination and ideal as calibration. We’ll examine the anti-ideal from each perspective.

Ideal as destination

To recapitulate, this line of thinking claims the ideal is useful because it provides a long-term goal and something to work toward. Symmetrically, the anti-ideal is useful because it provides a long-term anti-goal and something to avoid. We operationalize this as seeking to minimize the distance between our current world and the ideal and maximize the distance between our world and the anti-ideal.
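To make the operationalization concrete, here is a minimal sketch. It assumes, purely for illustration, that worlds can be embedded as numeric feature vectors, that Euclidean distance stands in for whatever metric between worlds we settle on, and that the two objectives can be combined by taking the difference of the two distances; `IDEAL`, `ANTI_IDEAL`, and the candidates are all hypothetical placeholders.

```python
import math

# Hypothetical placeholders: worlds embedded as 2-D feature vectors, with
# Euclidean distance standing in for whatever metric between worlds we adopt.
IDEAL = (1.0, 1.0)       # the best of all possible worlds
ANTI_IDEAL = (0.0, 0.0)  # the worst of all possible worlds

def score(world):
    """Prefer worlds far from the anti-ideal and near the ideal."""
    return math.dist(world, ANTI_IDEAL) - math.dist(world, IDEAL)

candidates = [(0.2, 0.9), (0.8, 0.7), (0.5, 0.1)]
print(max(candidates, key=score))  # (0.8, 0.7)
```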

This is where the symmetry breaks down. For most reasonable metrics, there is exactly one world at minimum distance from the ideal—namely, the ideal itself. Depending on what we believe about the set of possible worlds, there might be none, one, or many points at maximum distance from the anti-ideal (none if the possible worlds stretch away without bound, many if the maximizers trace out a whole boundary).
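The asymmetry is easy to check numerically. In the toy setup above, with the possible worlds restricted to a bounded grid, exactly one point attains the minimum distance to a target while many points attain the maximum:

```python
import math

# Possible worlds as a bounded 11x11 grid; the target sits at the center.
worlds = [(x / 10, y / 10) for x in range(11) for y in range(11)]
target = (0.5, 0.5)

dists = {w: math.dist(w, target) for w in worlds}
d_min, d_max = min(dists.values()), max(dists.values())

closest = [w for w, d in dists.items() if math.isclose(d, d_min)]
farthest = [w for w, d in dists.items() if math.isclose(d, d_max)]
print(len(closest))   # 1 -- only the target itself
print(len(farthest))  # 4 -- every corner of the bounded grid
```

Remove the boundary entirely and there is no farthest point at all, which is the "none" case above; the unique minimizer, meanwhile, stays put.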

A physical analogy may help your intuition. If the ideal is the attractive pole of a magnet, drawing us toward it, the anti-ideal is a repulsive pole, pushing us away. Place a small magnet, like pole facing, inside a ring surrounding the repulsive pole: it gets pushed outward, and any position on the ring’s edge is (subject to the constraint the ring imposes) at a maximum distance from the repulsive pole.

So the anti-ideal isn’t as useful as the ideal for orientation, but it is still useful. Some illustrative scenarios:

  • If there are two viable routes to the ideal, but one passes perilously close to the anti-ideal and one doesn’t, we should take the latter. (We suppose that our social engineering skills are imperfect so that skirting the anti-ideal entails some non-zero risk of accidentally falling into it.)
  • If there’s not one unique maximum value (ideal) but several, we might well prefer to break the tie by picking the one farthest from the anti-ideal. Again, this minimizes the risk of catastrophe. (A code sketch of this tie-break follows the list.)
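A sketch of that tie-break, under the same toy assumptions as above (hypothetical worlds as feature vectors, Euclidean distance as the metric):

```python
import math

ANTI_IDEAL = (0.0, 0.0)  # hypothetical worst of all possible worlds

# Several hypothetical worlds tied at the maximum value of justice.
tied_ideals = [(0.9, 0.2), (0.6, 0.6), (0.1, 0.9)]

# Break the tie by maximizing distance from the anti-ideal, minimizing
# the risk of a catastrophic slide toward it.
safest = max(tied_ideals, key=lambda w: math.dist(w, ANTI_IDEAL))
print(safest)  # (0.9, 0.2)
```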

Ideal as calibration

I think the argument for the anti-ideal is more compelling here. Last time, we said that, given our imperfect knowledge of possible worlds and of justice, we can treat the social engineering problem of finding the best possible world as a problem of statistical inference. When inferring the true distribution of possible worlds, a maximum (the ideal) is useful data and can shift our estimates substantially. Symmetrically, the minimum (the anti-ideal) also helps us better understand the distribution of possible worlds.
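As a toy illustration, suppose (a loud assumption, not anything defended above) that each possible world scores on a one-dimensional justice scale and that scores are uniform on an unknown interval. The maximum-likelihood estimate of that interval is just the sample minimum and maximum, so a single observation of the ideal or the anti-ideal moves the estimate more than any number of typical worlds can:

```python
import random

random.seed(0)

# Hypothetical model: justice scores of possible worlds, uniform on an
# unknown interval.  The worlds we've actually experienced cluster in a
# narrow neighborhood of it.
experienced = [random.uniform(0.4, 0.6) for _ in range(100)]

def estimate_support(samples):
    """MLE of a uniform distribution's support: the sample min and max."""
    return min(samples), max(samples)

print(estimate_support(experienced))               # roughly (0.4, 0.6)
print(estimate_support(experienced + [1.0]))       # ideal seen: max jumps to 1.0
print(estimate_support(experienced + [0.0, 1.0]))  # both extremes: (0.0, 1.0)
```

A hundred typical worlds barely move the estimated range, while one glimpse of either extreme shifts an endpoint immediately; that is the sense in which the anti-ideal, like the ideal, is unusually informative data.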

Knowing only our own neighborhood of possible worlds, the ones we’ve actually experienced, gives us a very limited perspective. Envisioning the ideal helps us understand what we have to gain through change; envisioning the anti-ideal helps us understand what we have to lose.

Even more simply: More information about the distribution is better.