Superforecasting Book Summary
The Art and Science of Prediction
Book by Philip Tetlock and Dan Gardner
Summary
In Superforecasting, Philip Tetlock and Dan Gardner reveal the techniques used by elite forecasters to predict future events with remarkable accuracy, and show how these skills can be cultivated by anyone to make better decisions in an uncertain world.
The Skeptic And The Optimist
Philip Tetlock considers himself an "optimistic skeptic" when it comes to forecasting. The skeptical side recognizes the huge challenges of predicting the future in a complex, nonlinear world. Even small unpredictable events, like the self-immolation of a Tunisian fruit vendor, can have cascading consequences no one foresaw, like the Arab Spring uprisings.
However, the optimistic side believes foresight is possible, to some degree, in some circumstances. We make mundane forecasts constantly in everyday life. Sophisticated forecasts underpin things like insurance and inventory management. The key is to figure out what makes forecasts more or less accurate, by gathering many forecasts, measuring accuracy, and rigorously analyzing results. This is rarely done today - but it can be.
Section: 1, Chapter: 1
"We Are All Forecasters"
"We are all forecasters. When we think about changing jobs, getting married, buying a home, making an investment, launching a product, or retiring, we decide based on how we expect the future will unfold. These expectations are forecasts."
Section: 1, Chapter: 1
Even Smart, Accomplished People Make Simple Forecasting Errors
In 1956, the respected physician Archie Cochrane was diagnosed with terminal cancer. An eminent specialist said Cochrane's axilla was "full of cancerous tissue" and he likely didn't have long to live. Cochrane immediately accepted this and started planning for death.
However, a pathologist later found no cancer in the tissue that was removed. The specialist was completely wrong. Being intelligent and accomplished was no protection against overconfidence.
Even more striking, Cochrane himself made this mistake, despite being a pioneer of evidence-based medicine. He railed against the "God complex" of physicians who relied on intuition rather than rigorous testing. Yet he blindly accepted the specialist's judgment.
Section: 1, Chapter: 2
WYSIATI Explains Why We Jump To Conclusions
WYSIATI (What You See Is All There Is) is a key mental trap that leads to flawed predictions. It refers to our mind's tendency to draw firm conclusions from whatever limited information is available, rather than recognizing the information we don't have.
For example, after the 2011 Norway terrorist attacks, many people immediately assumed Islamist terrorists were responsible, based on recent events like 9/11 and the bits of evidence available, like the scale of the attacks. However, the perpetrator turned out to be a right-wing anti-Muslim extremist, Anders Breivik.
WYSIATI explains why we jump to conclusions rather than saying "I don't know" or "I need more information." Our minds abhor uncertainty. We impose coherent narratives on events, even when key facts are missing. Breaking this habit is crucial to forecasting better.
Section: 1, Chapter: 2
Beliefs Are Hypotheses To Be Tested, Not Treasures To Be Guarded
Superforecasters treat their beliefs as tentative hypotheses to be tested, rather than sacred possessions to be guarded. This is encapsulated in the idea of "actively open-minded thinking."
Some key tenets of actively open-minded thinking:
- Be willing to change your mind when presented with new evidence
- Actively seek out information that challenges your views
- Embrace uncertainty and complexity; don't be afraid to say "maybe"
- View problems from multiple perspectives; don't get wedded to one narrative
- Resist the urge to simplify and impose falsely tidy stories on reality
- Expect your beliefs to shift over time as you learn and discover your mistakes
By holding beliefs lightly, and being eager to stress-test and refine them, we can gradually move closer to the truth. Superforecasters show that this approach produces vastly better predictions compared to stubborn, overconfident ideologues.
Section: 1, Chapter: 2
The Value Of Precise Forecasts
Vague language like "a serious possibility" or "a non-negligible chance" makes it impossible to assess whether a forecast was accurate or not. In contrast, precise probabilities, like "a 62% chance", allow predictions to be unambiguously judged. Precision is necessary for forecasts to be properly tested, tracked and improved. Some key principles:
- Replace vague language with numerical odds as much as possible
- Use finely grained percentage scales (30%, 31%, 32%) rather than coarse buckets (certain, likely, toss-up, etc.)
- Specify clear time horizons and definitions for all forecast questions
- Track predictions and grade them against what actually happened
- Calculate forecasters' accuracy using quantitative measures like Brier scores
Precision takes more mental effort. But embracing it is necessary to separate lucky guesses from true skill - and to refine that skill with practice and feedback.
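The Brier score mentioned above is simple to compute: it is the mean squared difference between each probability forecast and what actually happened. A minimal sketch (the three example forecasts are invented for illustration; Tetlock reports the original two-category Brier score, which is twice this value):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes.
    0.0 is perfect; a constant 50% forecast earns 0.25; 1.0 is worst."""
    if len(forecasts) != len(outcomes):
        raise ValueError("need one outcome per forecast")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said 62%, 80%, and 10% on three events,
# of which the first two happened:
score = brier_score([0.62, 0.80, 0.10], [1, 1, 0])
print(round(score, 4))  # prints 0.0648
```

Because the score punishes confident misses far more than timid ones, it rewards exactly the finely grained calibration the book argues for.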
Section: 1, Chapter: 3
This concept is also discussed in:
The Signal and the Noise
Why It's Hard To Assess Forecast Accuracy
Steve Ballmer's infamous 2007 forecast that "There's no chance that the iPhone is going to get any significant market share" looks hugely wrong in hindsight. But Ballmer never specified what "significant" market share meant, or what time period he was referring to. His forecast was too vague to definitively judge as right or wrong.
This is extremely common - and makes it effectively impossible to assess forecast accuracy. To be testable, forecasts need:
- Specific definitions. What counts as a "default" or a "bubble" or a "coup"?
- Precise time horizons. By what date will the event happen or not?
- Numerical probabilities that can be scored. "60% chance" can be graded later as right or wrong; "pretty likely" cannot.
- Repeated forecasts over time. One forecast is not enough - we need a track record.
Most real-world forecasts fail these criteria. As a result, we have little idea how accurate experts actually are, despite how much influence their predictions have.
Section: 1, Chapter: 3
Foxy Forecasters Beat Hedgehog Historians
In his famous essay "The Hedgehog and the Fox," Isaiah Berlin argued that thinkers can be classified into two categories: Hedgehogs, who view the world through the lens of a single defining idea, and Foxes, who draw on a wide variety of experiences and perspectives.
Tetlock's earlier research on expert political judgment found that forecasters who were Hedgehogs - committed to one big theoretical view of how the world works - tended to perform quite poorly. They were overconfident and reluctant to change their minds. Foxy forecasters were much more accurate. Rather than trying to cram complex reality into a single framework, they were comfortable with cognitive dissonance and pragmatically adapted their views based on new information. Some key Fox behaviors:
- Pursuing breadth rather than depth, gathering information from diverse sources
- Aggregating many micro-theories rather than trying to build one grand theory
- Frequently using qualifying words like "however" and "on the other hand"
- Readily admitting mistakes and changing their minds
- Expressing degrees of uncertainty, rather than certainty
The Hedgehog/Fox distinction points to a crucial insight: In a complex, rapidly changing world, cognitive flexibility is more valuable than theoretical elegance. The nimble fox prevails over the stubborn hedgehog.
Section: 1, Chapter: 3
Superforecasters Beat The Wisdom Of The Crowd By 60%
The Good Judgment Project (GJP), led by Philip Tetlock and Barbara Mellers, recruited thousands of volunteer forecasters to predict global events as part of a tournament sponsored by the research agency IARPA. Questions covered politics, economics, national security and other topics relevant to intelligence analysts.
The GJP used multiple methods to boost forecast accuracy, including training, teaming, and statistical aggregation. But its most striking finding was that a small group of forecasters, the "superforecasters", consistently outperformed others by huge margins.
Across the first 2 years of the tournament, superforecasters beat the "wisdom of the crowd" (the average forecast of all participants) by 60% - a stunning margin. They even outperformed professional intelligence analysts with access to classified data. This suggests that generating excellent prediction accuracy doesn't require subject matter expertise or insider information - just the right cognitive skills and habits.
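One of the GJP's statistical aggregation methods can be sketched in a few lines: average many individual forecasts, then "extremize" the mean by pushing it away from 50%, since independent forecasters who each hold back some confidence produce a consensus that is too timid. The exponent below is illustrative, not the GJP's fitted value:

```python
def crowd_forecast(probs, a=2.5):
    """Average individual probability forecasts, then extremize the mean.
    a=1 means no extremizing; larger values sharpen the consensus more."""
    mean = sum(probs) / len(probs)
    # transform p -> p^a / (p^a + (1-p)^a), which pushes p away from 0.5
    return mean ** a / (mean ** a + (1 - mean) ** a)

# Three forecasters lean the same way; the aggregate leans harder:
print(round(crowd_forecast([0.60, 0.70, 0.65]), 2))
```

An evenly split crowd stays at 50%, while any shared lean gets amplified toward certainty.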
Section: 1, Chapter: 4
Superforecasters Come From Diverse Backgrounds
Who are the superforecasters? They are a diverse group - engineers, lawyers, artists, scientists, Wall Streeters, and more. Many have graduate degrees, but some don't. They include a filmmaker, a mathematician, a pharmacist, and a retiree "looking to keep his mind active."
What they have in common is not so much who they are, but how they think. Superforecasters score highly on measures of fluid intelligence and actively open-minded thinking. They are numerate and capable of rapidly synthesizing information. But more important than raw intelligence is their cognitive style - they are actively open-minded, intellectually humble, eager to learn from their mistakes.
The superforecasters show that foresight isn't an innate gift, but a product of a certain way of thinking. And that way of thinking can be taught and cultivated - it doesn't require an elite background or PhD. It's an accessible skill.
Section: 1, Chapter: 4
Forecasting Doesn't Require Powerful Computers Or Arcane Math
Many superforecasters have backgrounds in STEM fields and are highly numerate. They are comfortable with concepts like Bayes' theorem for updating probabilities based on new information. Some even build their own computer models.
But advanced math is not essential. Most superforecasters say they rarely use quantitative models or crunch numbers. Instead, their forecasting mainly relies on thorough research, careful reasoning, and sound judgment.
For Lionel Levine, a math professor, not using quantitative methods is a point of pride. He wants to prove he can be a great forecaster without relying on his mathematical toolkit: "It's all, you know, balancing, finding relevant information and deciding how relevant is this really? How much should it really affect my forecast?"
The key skills of balancing inside vs outside views, synthesizing perspectives, granular distinctions, and continual updating are accessible to anyone.
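For forecasters who do want the machinery, the Bayes' theorem mentioned above takes only a few lines. The coup scenario and its numbers below are purely illustrative assumptions:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' theorem for a yes/no question: revise the prior probability
    of an event given how likely the new evidence is under each hypothesis."""
    numer = prior * likelihood_if_true
    return numer / (numer + (1 - prior) * likelihood_if_false)

# Hypothetical: a 30% prior chance of a coup, then a troop-movement report
# arrives that is 3x as likely in worlds where a coup is coming:
p = bayes_update(0.30, 0.60, 0.20)
print(round(p, 4))  # prints 0.5625
```

The point superforecasters internalize, with or without the formula, is proportionality: evidence shifts a forecast by how much more likely it is under one hypothesis than the other.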
Section: 1, Chapter: 5
Break Big Problems Down Using Fermi Estimates
To make impossibly complex problems tractable, superforecasters often use "Fermi-style" analysis, named after the physicist Enrico Fermi. The steps:
- Clearly specify the thing you want to predict (e.g. "How many piano tuners are there in Chicago?")
- Break the problem down into smaller, easier parts. ("How many pianos are there in Chicago? How often are they tuned each year? How many can one tuner service per year?")
- Make a reasonable guess for each component, based on whatever information you have or can gather. Focus on quantities you can approximate, even if crudely.
- Combine your component estimates into an overall estimate, using simple math (e.g. # of pianos * # of tunings per piano per year / # of tunings per tuner per year = # of tuners)
The resulting estimate won't be exact, but it's often surprisingly close - and much better than a wild guess. By breaking big mysteries down into small, knowable parts, Fermi estimates make unknowns more manageable.
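The steps above can be run through directly for the piano-tuner question. Every number here is a rough illustrative guess, which is the whole point of the method:

```python
# Fermi estimate: how many piano tuners are there in Chicago?
# All component figures are crude assumptions, not researched data.
population = 2_500_000              # people in Chicago
people_per_piano = 100              # roughly one piano per 100 people
tunings_per_piano_per_year = 1      # a piano gets tuned about once a year
tunings_per_tuner_per_year = 1000   # ~4 tunings/day over ~250 working days

pianos = population / people_per_piano                    # ~25,000 pianos
tunings_needed = pianos * tunings_per_piano_per_year      # ~25,000 tunings
tuners = tunings_needed / tunings_per_tuner_per_year
print(round(tuners))  # prints 25
```

Even if each guess is off by a factor of two, the errors tend to partially cancel, which is why Fermi estimates usually land within the right order of magnitude.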
Section: 1, Chapter: 5
The Outside View Keeps Forecasters Grounded
An essential habit of superforecasters is to take the "outside view" first. This means treating a problem as an instance of a broader class, and using that class as a starting point. If you're forecasting the success of a particular startup, the outside view means first looking at the base rate of success for all startups. If 90% of startups fail within 5 years, the outside view says there's a 90% chance this one will fail too.
Only after anchoring with the outside view do superforecasters take the "inside view" by analyzing the details of the case. If those details are exceptional, they shift the probability up or down from the base rate. But not by much - they know the outside view is usually a better guide than our internal narrative.
The outside view keeps us grounded. It prevents us from being swayed by compelling stories and overconfidently thinking "this time is different." Kahneman calls it "the single most important piece of advice regarding how to increase accuracy in forecasting."
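The anchor-then-adjust habit can be caricatured in code. Superforecasters do this judgmentally rather than with a fixed formula, so the sketch below is only a mnemonic for the discipline of keeping inside-view adjustments modest:

```python
def anchored_forecast(base_rate, inside_adjustment):
    """Start from the outside-view base rate, then apply a (deliberately
    modest) inside-view adjustment; clamp to a valid probability."""
    p = base_rate + inside_adjustment
    return min(max(p, 0.0), 1.0)

# Base rate: 90% of startups fail within 5 years. An unusually strong
# founding team might justify shifting the failure estimate down a little:
p_fail = anchored_forecast(0.90, -0.10)
print(round(p_fail, 2))  # prints 0.8
```

The discipline is in the second argument staying small: the base rate does most of the work, and the compelling inside story earns only a nudge.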
Section: 1, Chapter: 5
The Tip-Of-Your-Nose Perspective Is A Treacherous Guide
The "tip-of-your-nose" perspective is how we intuitively perceive the world. It refers to both
- the subjective vantage point we each have on reality, and
- the tendency to treat our personal, close-up view as the truth, even when it's distorted or missing key facts.
For example, after 9/11, many Americans felt intensely anxious about terrorism and assumed more major attacks were imminent and inevitable. The tip-of-your-nose view made it feel that way. But taking an "outside view" and comparing the 9/11 death toll to other risks like heart disease showed that an individual American's risk of dying in a terror attack was so low it was hardly worth worrying about.
Superforecasters know the tip-of-your-nose view is frequently misleading. It may "feel right" that a company is doomed to fail or that a war is unwinnable. But feelings are not a reliable guide to reality. Only by stepping outside ourselves and stress-testing our views against data can we avoid being misled.
Section: 1, Chapter: 5
The "Dragonfly Eye" Approach To Integrating Views
Effective forecasting requires synthesizing many perspectives to create a unified whole. Superforecasters use a "dragonfly eye" approach, named after the insect:
- Consider the problem from multiple angles, like the dragonfly's 30,000 lenses capturing different views
- Explicitly list reasons for and against a particular outcome
- Survey the views of other thoughtful observers and forecasters
- Distill all these views into a single overall judgment using precise probabilities
The dragonfly eye approach counteracts the limitations and biases of any one view. By seeing the problem "in stereo" from many angles, forecasters can construct a more complete, balanced picture.
Section: 1, Chapter: 5
"Diversity Matters As Much As Ability"
"What matters is having people who think differently and have different points of information, and this is really important. Having a group of really smart people who tend to see the world the same way and process information the same way isn't nearly as effective as a more diverse team." - Jonathan Baron
Section: 1, Chapter: 6
Superforecasters Aren't Afraid To Say "I Was Wrong"
One of the hardest things for any forecaster to do is to admit they were wrong. Humans are naturally resistant to acknowledging mistakes, due to cognitive dissonance and the pain of admitting error. We go to great lengths to rationalize failed predictions.
But superforecasters do the opposite. They are eager to acknowledge their misfires and examine why they happened. Some key practices:
- Meticulously tracking predictions so it's unambiguous when they fail
- Conducting "postmortems" to analyze the causes of mistakes
- Sharing lessons from failed forecasts with teammates to elevate the whole group
- Celebrating failed forecasts as learning opportunities, not shameful errors
- Revising their beliefs in light of results, even when it's uncomfortable
Superforecasters know there is no shame in being wrong. The only shame is in failing to acknowledge it or learn from it. By embracing their mistakes, they continuously sharpen their foresight.
Section: 1, Chapter: 7
"When Facts Change, I Change My Mind."
"When the facts change, I change my mind. What do you do, sir?"
This famous quote, attributed to John Maynard Keynes, encapsulates a core principle of superforecasting. Strong views, weakly held, are a virtue.
Unfortunately, most forecasters are slow to change their minds, even when the facts turn against them. In 2010, many economists warned that aggressive Fed policies risked runaway inflation. Years later, inflation hadn't appeared. Yet rather than admit error and update their models, most doubled down on their warnings.
That's a fatal error. The world changes. Surprises happen. New facts emerge. Superforecasters are always alert to how reality differs from their expectations. When gaps appear, they ask "why?" and eagerly revise their beliefs. They treat their opinions not as sacred possessions, but as temporary best guesses, always open to change.
Section: 1, Chapter: 7
Belief Updating, Not IQ, Is The Core Of Superforecasting
What makes superforecasters so good? It's not their raw intelligence. The real key is how they update their beliefs in response to new information. Regular forecasters tend to be slow to change their minds, over-weighting prior views and under-weighting new data. They suffer from confirmation bias, motivated reasoning, and belief perseverance.
Superforecasters do the opposite. When new information challenges their existing views, they pounce on it and aggressively integrate it. They are always looking for reasons they could be wrong.
Belief updating is hard; it's unnatural and effortful. But superforecasters cultivate the skill through practice and repetition, like building a muscle. Over time, granular, precise updating becomes a habit.
Section: 1, Chapter: 7
Grit And The Growth Mindset Are Essential For Great Forecasters
Becoming an excellent forecaster requires more than just raw intelligence. It demands the right mindset and determination. Two key traits:
- Grit - the tenacious pursuit of long-term goals in the face of adversity. Superforecasters have the dogged persistence to keep going even when the learning curve is steep and progress is slow.
- Growth Mindset - the belief that your abilities aren't fixed, but can be developed through hard work. Superforecasters view their skills not as static talents, but as muscles that grow with practice.
Both grit and a growth mindset are critical because getting great at forecasting is really hard. The world is complex and unpredictable. Feedback is slow and noisy. It can take years to measurably improve. Most people give up long before then.
But superforecasters stick with it. They have the grit to persist and the growth mindset to sustain motivation. They're energized by the challenge of getting better, one arduous step at a time. As the superforecaster Regina Schiller put it, "This is hard and I love doing it."
Section: 1, Chapter: 7
Teams Of Forecasters Beat Prediction Markets
Prediction markets are often hailed as the gold standard of forecasting. These markets harness the "wisdom of crowds" by having people bet on the likelihood of future events. But teams of superforecasters did even better. Across a two-year forecasting tournament, superforecaster teams bested prediction markets by 15-30%.
Why did teams do so well? The ability to share information and perspectives was key. Forecasters could share ideas, challenge each other, and collectively dig deeper into problems.
Diversity also played a big role. Teams with a variety of backgrounds and thinking styles generated more creative solutions. As long as discussions remained friendly and focused, diversity led to better accuracy.
Section: 1, Chapter: 8
Great Forecasters Have A "Scout Mindset"
The best forecasters tend to have what psychologist Julia Galef calls a "scout mindset." Think of an army scout, whose job is to accurately assess the terrain and risks ahead, in contrast to a soldier, whose job is to defeat the enemy. Forecasters with a scout mindset focus on gaining an accurate picture of reality, even when it's unpleasant or conflicts with their prior views. They are:
- Actively open-minded: Eager to test their beliefs and change their minds based on new information
- More objective: Able to separate their identity from their opinions and analyze emotionally charged issues impartially
- Comfortable with uncertainty: Accept that their knowledge is always incomplete and the future is never entirely predictable
In contrast, forecasters with a soldier mindset treat information as a weapon to defend their pre-existing beliefs. They are:
- Defensive: Emotionally attached to their opinions and quick to dismiss contrary evidence
- More biased: Allow motivated reasoning and personal agendas to skew their thinking
- Overconfident: See the future as more knowable and controllable than it is
Section: 1, Chapter: 9
Balancing Urgent Action With Careful Analysis
Effective leaders face a perennial dilemma: How to balance the need for decisive action with the need for careful deliberation. Superforecasters offer a way to resolve this dilemma by rapidly generating accurate probability estimates. Leaders can use these forecasts to make smarter bets, faster.
Leaders must proactively shape how forecasts are made. Some key steps:
- Meticulously breaking down big strategic issues into specific, forecastable sub-questions
- Assigning clear owners to each sub-question to gather information and generate estimates
- Encouraging a culture of creative friction, where forecasts are carefully debated and challenged
- Making final decisions that coherently synthesize the wisdom in the various forecasts
By balancing foresight with decisive action, leaders can get the benefits of anticipation without the inertia of excess caution. Organizations can act with "enlightened boldness" - an elusive but invaluable capability.
Section: 1, Chapter: 10
Cultivating Supersmart Algorithms Is The Next Frontier
As powerful as superforecasters are today, the future may belong to supersmart algorithms. Human Gut may soon meet Artificial Intuition as silicon superpredictors absorb the combined wisdom of carbon-based superforecasters.
IBM's Watson, for instance, can comb through millions of medical records to predict disease progression far faster and more accurately than doctors. Similar systems could soon be forecasting currency fluctuations, climate change impacts, and election results.
Still, humans will likely remain essential - not as solo forecasters, but as partners for AI. The key will be focusing human insight on what machines can't do well: Probing assumptions, generating novel scenarios, and making meaning from raw data. The result may be an "augmented intelligence" greater than either alone.
Section: 1, Chapter: 11
Related Content
The Signal and the Noise Book Summary
Nate Silver
In The Signal and the Noise, Nate Silver explores the art and science of prediction, explaining what separates good forecasters from bad ones and how we can all improve our understanding of an uncertain world.
Prediction
Decision Making
Economics
Range Book Summary
David Epstein
"Range" challenges the conventional wisdom that early specialization is the key to success. Instead, Epstein argues that in our increasingly complex and unpredictable world, it is those with broad experience and diverse skills who are best equipped to thrive.
"Range" challenges the conventional wisdom that early specialization is the key to success. Instead, Epstein argues that in our increasingly complex and unpredictable world, it is those with broad experience and diverse skills who are best equipped to thrive.
Personal Development
Learning
Psychology