Value of information calculator
Suppose you wanted to do the absolute most good you could. No satisficers here. (Of course, purely hypothetical. No one in these parts would have enough hubris to undertake such a monumental task. Surely.) You wouldn’t just luck into the right activity. It would probably require careful thinking and investigating many big, difficult problems. How do you deal with moral uncertainty? (MacAskill 2014) Which moral theory should you grant the most credence? What are all the possible do-gooding interventions? What are their unintended consequences?
If you insisted upon answering all these questions before acting, you’d almost surely die before you finished. My intuition is that endless investigation isn’t the balance that actually optimizes impact. You should probably act in the world at some point rather than idly noodle until the reaper visits. But when? One possibility is to just pick a semi-arbitrary date on the calendar—to timebox, if you can stomach the business jargon.
Can you do better than this? Can you come up with a more principled way to transition from investigation to action? I contend that the answer is “Yes” and that the tool is value of information calculations.
Expected value of information calculation
We’ll now look at a much simpler domain for expository purposes. Suppose a friend came to you and offered you a dollar if you called their coin flip correctly. As long as they didn’t charge you, it would make sense to agree as you’d expect to win 50 cents on average. Even better would be if you could swap out their coin with your own trusty two-headed coin. Then, you could be certain that you’d make the right call and you’d get the dollar every time. The extra information you get by knowing the outcome has value.
Expected value of perfect information
Slightly less obvious is that you can believe information is valuable to you without being certain exactly what that information is. Suppose you were unable to swap out the flipper’s coin, but a trustworthy friend came to you and whispered, “I know that the flipper’s coin has the same face on both sides. How much will you pay me to tell you whether it’s double heads or double tails?” After some thinking, I hope you’ll agree that you’d gain by paying up to 50 cents for this information. Without the information, you expect to earn 50 cents from the flipper’s bargain. With the information, you expect to earn a dollar. If your friend tells you that it’s a two-headed coin, you can simply bet on heads. If they tell you it’s a two-tailed coin, bet on tails. Either way, you’re guaranteed the dollar. As long as you can react accordingly via your bet, you should be willing to pay for this unknown information. Paying, for example, 20 cents would still leave you ahead because your net gain from the whole transaction would be 80 cents.
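The arithmetic above can be sketched in a few lines. This is an illustrative calculation using the numbers from the coin example, not the calculator's own code:

```python
# Expected value of perfect information (EVPI) for the coin-flip bet.
# Values and probabilities come from the example in the text.

def ev_without_info():
    # Without the tip, either call wins the dollar half the time.
    p_win = 0.5
    return p_win * 1.00 + (1 - p_win) * 0.00

def ev_with_info():
    # With a perfectly reliable tip, you always make the right call,
    # whichever double-sided coin turns out to be in use.
    p_two_headed = 0.5  # prior that the secret coin is double-headed
    return p_two_headed * 1.00 + (1 - p_two_headed) * 1.00

evpi = ev_with_info() - ev_without_info()
print(evpi)  # 0.5: the most you should pay the whispering friend
```

The difference between the two expected values, 50 cents, is exactly the ceiling on what the whisper is worth.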
Expected value of imperfect information
“But I can’t believe my susurrous friend unreservedly and still hold my head high at the local fallibilist’s meetings!”, you object. Right you are. Even if your friend’s whispers don’t leave you 100% certain about what kind of coin the flipper has, we can still apply the same logic from before. The arithmetic is just a bit more complicated. Ultimately, the conclusion that reducing our uncertainty can have value—even without knowing in which direction the uncertainty will be reduced—remains intact.
But you might not need to understand the details of the calculation laid out in those links because I provide a handy dandy value of information calculator for you here.
The input required by the calculator is a tree in YAML format. The top level of the tree describes the different pieces of information you might find after investigating (e.g. the coin is double-headed) and how likely you think each piece of information is. Each of these info nodes has children corresponding to actions you might take (e.g. bet heads). The final level of the tree (the children of the action nodes) describes the outcomes that might actually occur and their probabilities given the information received.
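The computation over such a tree can be sketched as follows. Note that the field names here ("prob", "actions", "outcomes", "value") are illustrative guesses at a schema, not necessarily the calculator's actual YAML format:

```python
# Value-of-information computation over an info -> action -> outcome
# tree, shown on the two-headed/two-tailed coin scenario.

tree = [
    {"info": "two-headed", "prob": 0.5, "actions": [
        {"action": "bet heads", "outcomes": [{"value": 1.0, "prob": 1.0}]},
        {"action": "bet tails", "outcomes": [{"value": 0.0, "prob": 1.0}]},
    ]},
    {"info": "two-tailed", "prob": 0.5, "actions": [
        {"action": "bet heads", "outcomes": [{"value": 0.0, "prob": 1.0}]},
        {"action": "bet tails", "outcomes": [{"value": 1.0, "prob": 1.0}]},
    ]},
]

def action_ev(action):
    return sum(o["value"] * o["prob"] for o in action["outcomes"])

# With information: for each possible finding, take the best response.
ev_with_info = sum(
    node["prob"] * max(action_ev(a) for a in node["actions"])
    for node in tree
)

# Without information: commit to the single action that is best on
# average, marginalizing over what the investigation would have said.
ev_without_info = max(
    sum(node["prob"] * action_ev(node["actions"][i]) for node in tree)
    for i in range(len(tree[0]["actions"]))
)

print(ev_with_info - ev_without_info)  # 0.5
```

The "without information" branch is exactly the simplified comparison scenario the calculator also displays: you must pick one bet before hearing anything.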
So the default tree entered below encodes the coin flip scenario described above, in the case where you have no clue as to which double-sided coin is in use and you believe your susurrous friend is absolutely reliable.
In addition to showing the expected value of information, the calculator shows the corresponding simplified scenario in which you have no way of gaining information about the outcomes at the bottom of the tree. For example, it shows the original coin flip scenario before your susurrous friend comes along. This is offered simply as a point of comparison.
MacAskill, William. 2014. “Normative Uncertainty.” PhD thesis, University of Oxford. http://commonsenseatheism.com/wp-content/uploads/2014/03/MacAskill-Normative-Uncertainty.pdf.