On casual futurism

Predicting the future is hard. Research on forecasting techniques is inconclusive, but scenario planning might be the least bad approach.

The human race, to which so many of my readers belong, has been playing at children’s games from the beginning…. And one of the games to which it is most attached is called “Keep to-morrow dark,” …. The players listen very carefully and respectfully to all that the clever men have to say about what is to happen in the next generation. The players then wait until all the clever men are dead, and bury them nicely. They then go and do something else. That is all. For a race of simple tastes, however, it is great fun. (Chesterton 1904)

“How likely are we to have holophonors by 3002?”

“Pretty likely. They’re pretty much just better oboes, right? 80% chance?”

“Keep in mind the overconfidence bias.”

“Alright, 70% chance.”

“Also, note that people are pretty bad at predictions. A study by George Wise found that out of 1556 naive medium-term predictions made publicly by Americans between 1890 and 1940, just under 40% had been fulfilled or were in progress by 1976¹ (Wise 1976).”
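(An aside: the scale of Wise’s result is easy to make concrete. The sketch below uses only the figures quoted above; the flat 40% rate approximates his “just under 40%.”)

```python
# Wise (1976): 1556 public medium-term technology predictions made
# between 1890 and 1940; just under 40% fulfilled or in progress by 1976.
n_predictions = 1556
hit_rate = 0.40  # approximation of "just under 40%"

hits = n_predictions * hit_rate
misses = n_predictions - hits
print(f"fulfilled or in progress: ~{hits:.0f}")
print(f"failed or stalled:        ~{misses:.0f}")
```

In other words, roughly three out of five confident public predictions simply didn’t pan out.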

“Fine. Then I won’t just make a reflexive prediction. I’ll give the matter serious thought… Well, people in the future will probably be really into opera and the arts because shiny, metal robots will do all the real work. So they’ll be sitting on their hover chairs in their spandex togas. And they’ll want to listen to something—but see something at the same time—just like laser light shows. But lasers will be pretty passé in the future (what with the ubiquitous laser pocket knives, laser watches, and laser pointers). So holophonors will be the perfect thing. So, I guess a 90% chance of holophonors?”

“Ah ha ha. You fell right into my trap! Your casual futurism betrays you! Just by imagining that scenario, you think it’s more likely.”

“Says who?”

“John Carroll, that’s who. He asked some college students to imagine themselves on election day for the 1976 U.S. presidential election (Carroll 1978). Some were told to imagine Carter winning. They were significantly more certain that Carter would win than those who hadn’t imagined anything.”

“Well, maybe it was a persuasively coherent reverie. My holophonor scenario was pretty airtight. It doesn’t sound so bad to be convinced by an exceptionally plausible scenario…. Wait, what happened with those told to imagine Ford winning?”

“I think you can guess. They became more certain that Ford would win. He got pretty similar results when he asked students to predict the success of the University of Pittsburgh’s football team after having some of the students imagine a good season and some imagine a bad season. Carroll concluded, ‘The objective fact that some events are imaginary, hypothetical, inferred rather than observed … is poorly coded or not properly used. Thus, the act of posing a problem or asking a question could itself change the beliefs of subjects.’”

“It could be worse. I could believe in something absurd—like a future with no holophonors. Out of all possible scenarios, I described the most plausible.”

“Nope. People’s default, ‘realistic’ predictions are pretty much just their most optimistic predictions (Buehler, Griffin, and Ross 1994; Newby-Clark et al. 2000).”

“But how common is this problem? It seems like you just tricked me into a vivid visualization.”

“Not so much. Constructing details and filling in gaps is an almost inevitable part of any serious prediction effort (Griffin, Dunning, and Ross 1990).”

“Alright, you smug snake. I give up. I can’t just reflexively shout out a number, and trying to think about the prediction in detail only makes matters worse. What should I do then? How do I see into the future?”

“To be honest, I’m not sure. Since naive methods seem to fare so dismally, we should probably use some sort of system. Unfortunately, there’s not a lot of empirical evidence on effective forecasting techniques.”

“Presumably we want to minimize all these biases, right?”

“Yeah. There are a lot of techniques to choose from, though (Technology Futures Analysis Methods Working Group 2004). Since we’re just a couple of schlemiels, we can’t really call up a panel of experts for the Delphi method. And for a lot of the mathematical models, ‘even the relevant variables are not known, let alone the linkages between the variables’ (Martino 2003). If we feel we must forecast, the best general-purpose technique might be scenario planning.”

“I thought you just got done scolding me for scenarios!”

“I did. But the distinguishing feature of scenario planning (a term of art) is a semi-rigorized approach to generating fundamentally divergent, coherent narratives of the future. Proponents suggest the consideration of multiple scenarios is salubrious.”

“Are they right?”

“Maybe. As I lamented already, the evidence is sparse. And a lot of that evidence relies on self-report about the decision process. ‘Since [decision] outcome is often difficult to evaluate the … process perspective has become the major stream of research on decision quality’ (Meissner and Wulf 2013).”

“That sounds problematic, since your major contention is the dominating role of bias.”

“Exactly. But the outcome evidence that exists does seem to suggest that scenario planning is a bit better than naive methods. Whether it’s actually satisfactory…”

“Good news first. What’s the evidence in favor of scenario planning?”

“The most direct evidence of outcome efficacy comes from three researchers at the University of Surrey (Phelps, Chan, and Kapsalis 2001). They performed an observational study of information technology companies in the UK. After sampling, they looked at 50 companies using scenario planning and 50 that didn’t. The companies using scenario planning showed significantly greater growth in profits and return on capital employed, though they did not show significantly greater growth in clients. They did a similar study with 22 water companies. Here, there was no significant relationship between scenario planning and the performance variables.”

“Hm. What’s the other ‘favorable’ evidence?”

“In a repeated measures study, researchers found that scenario planning widened 50% and 90% confidence intervals on personally important strategic measures by 56% and 44% respectively (Schoemaker 1993). However, they also found that when asked to construct extreme scenarios (judged as implausible), ranges actually contracted.”
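(To make those percentages concrete: widening an interval by 56% means stretching it symmetrically about its midpoint. A minimal sketch — the starting interval below is hypothetical; only the 56% figure comes from Schoemaker:)

```python
def widen(low, high, pct):
    """Stretch the interval (low, high) symmetrically about its midpoint by pct."""
    mid = (low + high) / 2
    half = (high - low) / 2 * (1 + pct)
    return mid - half, mid + half

# Hypothetical 50% confidence interval before scenario planning,
# widened by the 56% reported in Schoemaker (1993):
before = (10.0, 20.0)
after = widen(*before, 0.56)
print(round(after[0], 1), round(after[1], 1))  # 7.2 22.8
```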

“So whether scenario planning increases or decreases confidence intervals depends on which scenarios are constructed.”

“Yep. In an experimental study of graduate management students planning for a case company, students who went through a full scenario planning process showed no evidence of the framing bias (Meissner and Wulf 2013). Students who went through only the initial part of the scenario process, but didn’t actually generate scenarios, still showed susceptibility to the framing bias. However, students who used traditional strategic planning tools (like SWOT and Porter’s five forces) were also effectively debiased. Other researchers even suggest that directions akin to ‘think harder’ are sufficient to defeat the framing bias (Wright and Goodwin 2002).”

“What else?”

“That’s pretty much all I could muster in favor of scenario planning. A pretty fundamental concern is evidence that generating multiple scenarios doesn’t alter point predictions. Researchers did a study in which they asked university students to estimate when they’d complete school assignments and then followed up to determine the actual completion times (Newby-Clark et al. 2000). Through a variety of experimental permutations, they concluded that, ‘Participants’ final task completion time estimates were not affected when they generated pessimistic scenarios … in combination with more optimistic scenarios. … [R]egardless of plausibility, predictors did not attend to pessimistic scenarios.’ Similar results were found by Paul Schoemaker (Schoemaker and Heijden 1992).”

“Any more?”

“Yes. A quasi-experimental study of managers found that scenario planning decreased rational decision-making and increased intuitive decision-making as measured by the General Decision-Making Style Survey (Chermack and Nimon 2008).”

“That sounds like it could be conducive to biases.”

“Yeah. Ronald Bradfield offers a pretty harsh indictment of scenario planning (Bradfield 2008). He observed five groups of five or six postgraduate students developing scenarios for a designated organization. He observed that, for each group, their starting point determined which factors were subsequently explored in scenarios and this starting point was essentially determined by events highly publicized in the media like avian influenza and stem cell research. When countervailing evidence was introduced or alternate developments were suggested, the groups generally discarded them, returning to a ‘common … midpoint of events that were expected to occur’. Bradfield concluded that ‘there was no evidence of the so-called out of the box thinking in the scenarios and there were no strategic insights as to how the future might evolve in new and unprecedented ways’.”

“Is that everything?”

“That’s pretty much all the useful information I could find.”

“So where does that leave us?”

“I’m not totally sure. Schoemaker concluded that, ‘Scenarios thus exploit one set of biases (such as the conjunction fallacy and intransitivities of beliefs) to overcome another set, namely overconfidence, anchoring and availability biases.’ (Schoemaker 1993). Ultimately, scenario planning may be one of the less bad prediction methods.”

Balzer, William K, Lorne M Sulsky, Leslie B Hammer, and Kenneth E Sumner. 1992. “Task Information, Cognitive Information, or Functional Validity Information: Which Components of Cognitive Feedback Affect Performance?” Organizational Behavior and Human Decision Processes.

Bradfield, Ronald M. 2008. “Cognitive Barriers in the Scenario Development Process.” Advances in Developing Human Resources. http://www.researchgate.net/publication/249631408_Cognitive_Barriers_in_the_Scenario_Development_Process.

Buehler, Roger, Dale Griffin, and Michael Ross. 1994. “Exploring the ‘Planning Fallacy’: Why People Underestimate Their Task Completion Times.” Journal of Personality and Social Psychology. http://homepages.se.edu/cvonbergen/files/2013/01/Exploring-the-Planning-Fallacy_Why-People-Underestimate-Their-Task-Completion-Times.pdf.

Carroll, John S. 1978. “The Effect of Imagining an Event on Expectations for the Event: An Interpretation in Terms of the Availability Heuristic.” Journal of Experimental Social Psychology.

Chermack, Thomas J, and Kim Nimon. 2008. “The Effects of Scenario Planning on Participant Decision-Making Style.” Human Resource Development Quarterly. http://www.thomaschermack.com/Thomas_Chermack_-_Scenario_Planning/Research_files/Chermack(2008)EffectsofSPonDM.pdf.

Chesterton, G. K. 1904. The Napoleon of Notting Hill. The Bodley Head. http://www.gutenberg.org/ebooks/20058.

Griffin, Dale W, David Dunning, and Lee Ross. 1990. “The Role of Construal Processes in Overconfident Predictions About the Self and Others.” Journal of Personality and Social Psychology. http://psych.colorado.edu/~vanboven/teaching/p7536_heurbias/p7536_readings/griffen_dunning_ross_1990.pdf.

Technology Futures Analysis Methods Working Group. 2004. “Technology Futures Analysis: Toward Integration of the Field and New Methods.” Technological Forecasting and Social Change. https://noppa.aalto.fi/noppa/kurssi/phys-c1380/luennot/PHYS-C1380_technology_futures_analysis.pdf.

Martino, Joseph P. 2003. “A Review of Selected Recent Advances in Technological Forecasting.” Technological Forecasting and Social Change.

Meissner, Philip, and Torsten Wulf. 2013. “Cognitive Benefits of Scenario Planning: Its Impact on Biases and Decision Quality.” Technological Forecasting and Social Change.

Newby-Clark, Ian R, Michael Ross, Roger Buehler, Derek J Koehler, and Dale Griffin. 2000. “People Focus on Optimistic Scenarios and Disregard Pessimistic Scenarios While Predicting Task Completion Times.” Journal of Experimental Psychology: Applied.

Phelps, R, C Chan, and SC Kapsalis. 2001. “Does Scenario Planning Affect Performance? Two Exploratory Studies.” Journal of Business Research.

Schmitt, Neal, Bryan W Coyle, and Larry King. 1976. “Feedback and Task Predictability as Determinants of Performance in Multiple Cue Probability Learning Tasks.” Organizational Behavior and Human Performance.

Schoemaker, Paul JH. 1993. “Multiple Scenario Development: Its Conceptual and Behavioral Foundation.” Strategic Management Journal.

Schoemaker, Paul JH, and Cornelius AJM van der Heijden. 1992. “Integrating Scenarios into Strategic Planning at Royal Dutch/Shell.” Strategy & Leadership.

Wise, George. 1976. “The Accuracy of Technological Forecasts, 1890-1940.” Futures.

Wright, George, and Paul Goodwin. 2002. “Eliminating a Framing Bias by Using Simple Instructions to ‘Think Harder’ and Respondents with Managerial Experience: Comment on ‘Breaking the Frame’.” Strategic Management Journal.

  1. I expected to find that one of the major difficulties in making forecasts is the limited opportunity for confirmation or rejection, due to the timespans involved. I was quite surprised to find that, at least for some experimental tasks, this sort of feedback makes predictions worse (Balzer et al. 1992; Schmitt, Coyle, and King 1976).↩︎