Some businesspeople are successful.  Repeatedly.

Some businesspeople are unsuccessful.  Repeatedly.

Repeated success is often a sign of smarts, especially smarts in forecasting.

Get forecasting right, repeatedly, in business, and you are probably doing well and in demand.

Get forecasting wrong, repeatedly, in business, and you may be out of business completely and forever.

How do we recognize business forecasting we can rely on?

How can we get better at it?

Here are some insights from people who know way more about this than I do.

Chance[1]

Let’s start with chance.

Borrowing from Nassim Nicholas Taleb[2], consider the following:

If we create a pool of 1,000 stockbrokers, chance alone predicts that in Year 1 50% will earn returns above the pool median, i.e., 500 brokers will beat the competition.  If we repeat this over five consecutive annual cycles, chance alone predicts that at the end of Year 5, 31 brokers will have had returns above the pool’s median for five consecutive years.  Are they smarter than the rest?  Or just lucky?

And of course, the same is true for stockbrokers who failed to beat the median for five years in a row.  Chance alone predicts that in Year 1 50% will earn returns below the pool median.  If we repeat this over five consecutive annual cycles, chance alone predicts that at the end of Year 5, 31 stockbrokers will have earned returns below the pool median for five consecutive years.  Are they more foolish than the rest?  Or were they just unlucky?

The situation gets even more interesting if we extend the thought experiment to stockbrokers who beat the market median in four of the five years, i.e., they underperform for one year but beat the median in the other four.  Chance alone predicts that group will total 156 stockbrokers, or 15.6% of the pool.

If we add these two groups of successful brokers together, chance alone predicts that 187 stockbrokers will be above the median in at least four of five years, or 18.7% of the original group of 1,000.

And chance alone predicts that 187 stockbrokers will perform below the median in at least four of five years, or 18.7% of the original group of 1,000.
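
For readers who want to check the arithmetic, here is a short Python sketch of the thought experiment.  The only assumption is the one the example itself makes: each broker independently has a 50% chance of beating the pool median in any given year.

```python
from math import comb

POOL = 1000   # brokers
YEARS = 5
P = 0.5       # chance of beating the median in any one year

def expected(k):
    # Expected number of brokers beating the median in exactly k of the 5 years
    return POOL * comb(YEARS, k) * P**k * (1 - P)**(YEARS - k)

print(int(expected(5)))                # 31  -- above the median all five years
print(int(expected(4)))                # 156 -- above in exactly four of five
print(int(expected(5) + expected(4)))  # 187 -- above in at least four of five
```

By symmetry, the same numbers describe the under-performers.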

What separates the over-performers from the under-performers?  Is it smarts, sagacity[3] or chance?  How do we know if we do not have a baseline appreciation of chance to compare it against?

If we have five years in a row on the over-performance side, how careful do we need to be about fluffing ourselves up?

Or if we have five years in a row on the under-performance side, how careful do we need to be about beating ourselves up?

Here is another worry about chance.

Using an evenly balanced coin, there is a 1 in 1,024 chance of getting ten heads in a row in a single set of ten tosses.  If you run 1,024 sets of ten tosses, you have about a 63% chance of getting ten heads in a row at least once: likely, but far from certain.  However, it is no more likely to happen on set 1,024 than on set 1, set 563, set 999, set 27 or set 3.  While ten heads in a row on set 1 is improbable for any one person, if you assigned over a thousand people the task of completing 1,024 sets of ten tosses each, it would probably happen on set 1 for at least one of them.  Or to put it another way, if you tasked over a thousand stockbrokers with beating the stock market ten years in a row, chance alone suggests at least one of them will probably do it.  Perhaps because they were the most brilliant, and perhaps not.
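
The probabilities are easy to verify.  Here is a minimal Python sketch, using exact arithmetic rather than actual coin tosses:

```python
p_run = 0.5 ** 10              # ten heads in a row: 1 in 1,024
print(p_run)                   # 0.0009765625

# Chance of at least one ten-head run across 1,024 independent sets --
# likely, but well short of certain:
print(1 - (1 - p_run) ** 1024)  # about 0.63

# The same 0.63 answers the last question: the chance that at least one
# of 1,024 people gets ten heads on their very first set.
```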

Things we can do to assess the role of chance in target forecasting

  • Construct a well-designed pool that includes our target performers.
  • Construct a well-designed algorithm for assessing chance within the pool (see the sketch after this list).
  • See how well the pool does against our model.
  • See how well our target does against the model.
  • See how well our target does against the pool.
  • Track for multiple repetitions.  The consistency of performance across multiple repetitions can help determine reliability.
  • Track across multiple environments and changing tides that affect the pool.  The consistency of performance across multiple variations can help determine reliability.
  • Track misses as well as hits, successes as well as failures, and the things that were overlooked but with hindsight separate a reliable forecaster from a less reliable one.
  • Look for trends that affect the pool overall, like a long running bull or bear market, and allow for that in our performance assessment.
  • Never stop tracking.
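
As a minimal sketch of the first few steps, here is a chance-only model built on the same coin-flip assumption as above; the pool size, years and the target’s record are illustrative, not real data:

```python
import random

def null_pool(pool_size=1000, years=5, seed=7):
    # Chance-only pool: each performer's yearly "beat the median" result
    # is an independent coin flip; returns each performer's win count.
    rng = random.Random(seed)
    return [sum(rng.random() < 0.5 for _ in range(years))
            for _ in range(pool_size)]

target_wins = 4  # suppose our target beat the median in 4 of 5 years
pool = null_pool()
share = sum(w >= target_wins for w in pool) / len(pool)
print(f"Share of a luck-only pool doing at least as well: {share:.1%}")
# Close to the 18.7% computed above -- a 4-of-5 record, on its own,
# is weak evidence of skill.
```

If our target’s record does not stand out against a pool like this, we do not yet have evidence of skill.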

Experts, knowledge and foresight[4]

Moving on to experts, whether you are relying on one, hiring one, want to become one, or believe you are one, here’s what some research shows at a high level:[5]

  • People generally, including experts, are not that great at judging the likelihood of uncertain events.
  • Pools of specialists can have forecasting ability no better than chance.
  • Expert forecasters as a group can be consistently wrong: they rely on the same assumptions, share the same tendency to assume that next year will be much like last year, over-rely on past performance as a predictor of future performance, or succumb to defensive decision making.
  • Talented generalists often outperform specialists.
  • Properly assembled and managed teams that include a mix of specialists and generalists will outperform pools or teams of experts alone.
  • When we are 90% or more sure that we know what we know, we are still wrong 50% of the time, i.e., a high degree of subjective certainty is not a reliable indicator of forecasting accuracy.
  • Cognitive biases colour forecasting for experts as well as the rest of us e.g., confirmation bias, concern for reputation, group influences, and blind-spot bias.
  • Statistical noise[6] is as much a problem for experts as it is for anyone else and can impair professional judgement.

Forecasting[7]

However, the research also supports the following, at a high level:

  • Foresight is a real and measurable skill that can be learned and improved on.
  • Good forecasters are distinguished not by who they are, or their credentials, or what they know, but by what they do, including how they think and how well they think.
  • Good forecasters have high general intelligence, a cognitive style that is more reflective than impulsive, and are actively open-minded.
  • Training can improve forecasting acumen.
  • Teams can outperform individuals.
  • Teams with a mix of specialists and generalists will outperform teams of experts alone.

Getting better at forecasting[8]

Things we can do to get better at forecasting in business include:

  • Learn statistical fundamentals and improve our numeracy.
  • Study cognitive biases, and learn our own and our teams’.
  • De-bias as much as possible and in real time if we can.
  • Control for group bias or for dominance by or deferral in favour of key team members, including the boss.
  • Practice a lot and keep records.
  • Do forecasting in a team setting where the team is forecasting, not just the boss. 
  • Make frequent precise predictions and measure accuracy with real-time feedback (a scoring sketch follows this list).
  • Assign confidence levels to our forecasting, so we are not only assessing the likelihood of a particular outcome but how confident we are in that forecast.
  • Pay particular attention to situations where data, logic, judgement and experience can be important, and do not get bogged down by the things that are extremely certain or very unknowable.
  • Seek out base rates early in the process and use them in forecasting.
  • Use proven checklists, rules of thumb and simple averaging to reduce noise and bias and as part of our forecasting.
  • Be honest in assessing success i.e., how much was due to chance, and how much was the right outcome but for reasons other than what we relied on?
  • Keep an open mind and do not attach ego to our forecasts or the process we use; focus on accuracy.
  • Assess each opportunity we forecast as a member of a reference class and not as a unique case standing on its own.
  • Favour relative judgements and relative scales on a comparative basis with others, and not absolute or stand-alone judgements.
  • Be capable of working independently at first, and later of aggregating as part of the team.
  • Break the forecasting down into several independent tasks.
  • Resist premature intuitions; save intuition and judgement for later in the process.
  • Aggregate our forecasts with other independent forecasters.
  • Look for and appreciate the stories the numbers tell.
  • Look for and appreciate the story about the numbers we consider, including who assembled them and why, for what purpose, with what intent and emotions, and with what philosophical values in mind.
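
One common way to score the precision-plus-confidence habit in the list above is the Brier score, which averages the squared gap between the probabilities we stated and what actually happened.  A sketch follows; the track record is invented for illustration:

```python
def brier_score(forecasts):
    # forecasts: (stated_probability, outcome) pairs, where outcome is 1
    # if the event happened and 0 if it did not.  Lower is better: 0.0 is
    # perfect, and an always-say-50% forecaster scores 0.25.
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical record: the probability we assigned vs. what happened
record = [(0.9, 1), (0.7, 0), (0.8, 1), (0.6, 1), (0.9, 0)]
print(round(brier_score(record), 3))  # 0.302 -- worse than it may have felt
```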

The qualities of good forecasters[9]

Studies of forecasting and forecasters suggest the following qualities for good forecasters:

  • Cautious.
  • Humble.
  • Non-deterministic, i.e., not predisposed to believe that outcomes are determined by inviolate natural laws or that the same conditions (or participants) will always produce the same results.
  • Skilled at finding data.
  • Open-minded.
  • Inquiring.
  • Reflective.
  • Curious.
  • Numerate.
  • Pragmatic.
  • Analytical.
  • Good at synthesizing.
  • Probability-focused.
  • Thoughtful updaters.
  • Willing to change their mind.
  • Conduct comparative analysis including ranking, scales and relative judgements.
  • Open to rules of thumb, averages and other simple forms of aggregation.
  • Continuous learners.
  • Committed to improving.
  • Tenacious.
  • Bias aware e.g., confirmation bias, anchoring, etc.
  • Self-aware, especially of their own biases and tendencies.
  • Does not rush to blame, find fault or pass judgement.
  • Is as “why” focussed as “what” focussed.
  • Seeks meaning and understanding.
  • Looks for the story behind the numbers.
  • Cares about definitions.
  • Works independently, but able to aggregate multiple independent judgements and to work in a team setting.
  • Takes an outside view by assessing the problem in the context of a reference class and not as a unique event.

Forecasting in teams[10]

The studies suggest that forecasting is better in a team setting.  So in a business context:

  • Assemble a good forecasting team.
  • Good members are generally intelligent, numerate, statistically minded, curious, honest, cautious, humble, open-minded, reflective, analytical, respectful of data and alert to cognitive bias in themselves and others (see the qualities noted above).
  • Ensure the team is intellectually diverse, including some relevant expertise as well as generalists.
  • Beware of dominant voices.
  • Beware of ego.
  • Build trust among team members, including making it acceptable for members to challenge orthodoxy, change their minds or be wrong.
  • Keep information rich, real-time accounts.
  • Allow for frequent feedback.
  • Update and reappraise the forecast at reasonable intervals.
  • Track performance and post-mortem, including accuracy, precision and process.
  • Seek to aggregate independent judgements from diverse worldviews.
  • Implement three phases in team forecasting:
  • Phase 1 – everything is explored from all angles with all data, assumptions and approaches on the table.
  • Phase 2 – available data and assumptions are evaluated, with sufficient time and trust for productive disagreement.
  • Phase 3 – a converging phase where the team settles on a prediction (probability and confidence); a simple aggregation sketch follows.
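
For the converging phase, the simplest defensible way to settle on a single number is plain averaging of the members’ independent estimates, one of the simple forms of aggregation mentioned earlier.  A sketch, with invented numbers:

```python
def aggregate(estimates):
    # Unweighted average of independent probability estimates -- crude,
    # but a well-studied way to cancel out individual noise and bias.
    return sum(estimates) / len(estimates)

# Hypothetical Phase 3: each member's independent probability that the
# contract renews this year
team = [0.55, 0.70, 0.60, 0.45, 0.65]
print(round(aggregate(team), 2))  # 0.59
```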

A concluding thought

There was a time when the quotable comments on forecasting were all quite derisive, from John Kenneth Galbraith to Yogi Berra.  But scholarship has advanced, and the good news is that forecasting is a skill that can be learned and developed, especially in teams.  All of us in business have a chance to get better at it, and those of us who do the best should reliably prosper as a result.  And with this in mind, I conclude with a final thought attributed to Professor Paul Saffo:[11]

“The goal of forecasting is not to predict the future but to tell you what you need to know to take meaningful action in the present.”


[1] Consider these books by Nassim Nicholas Taleb – Fooled by Randomness: The Hidden Role of Chance in Life and the Markets and The Black Swan: The Impact of the Highly Improbable. Also, Gerd Gigerenzer’s Risk Savvy: How to Make Good Decisions and Calculated Risks: How to Know When Numbers Deceive You.  Or David Spiegelhalter’s The Art of Statistics.

[2] See Taleb, Gigerenzer and Spiegelhalter.

[3] from wineverygame.com – “sagacity”

noun

  1. Acutely perceptive judgment which enables good decision making
  2. Intelligent and discriminating use of knowledge

Sagacity refers to incisive wisdom or sharp discernment. It can be applied generally or to any specific area of knowledge. Most often sagacity has something to do with perceiving the nature of a thing or situation, especially as it aids in good decision making. 

[4] Consider the work of Philip E. Tetlock, including Expert Political Judgment: How Good Is It?  How Can We Know? and Superforecasting: The Art and Science of Prediction.  Or Gerd Gigerenzer’s Risk Savvy: How to Make Good Decisions.  See also Noise: A Flaw in Human Judgment by Daniel Kahneman, Olivier Sibony and Cass R. Sunstein, and The Data Detective by Tim Harford.

[5] See immediately preceding footnotes and Paul J. H. Schoemaker and Philip E. Tetlock, Superforecasting: How to Upgrade Your Company’s Judgment, Harvard Business Review, May 2016.  See also James Surowiecki’s The Wisdom of Crowds.

[6] Kahneman, Sibony and Sunstein.

[7] Schoemaker and Tetlock.

[8] Schoemaker and Tetlock.  See also James Surowiecki, and Kahneman, Sibony and Sunstein.

[9] Schoemaker and Tetlock; Kahneman, Sibony and Sunstein; Harford.

[10] Schoemaker and Tetlock; also Kahneman, Sibony and Sunstein.

[11] Check out Paul Saffo’s article Six Rules for Effective Forecasting in the Harvard Business Review (July-August 2007).

© Phil Thompson – www.philthompson.ca