We all make predictions and are encouraged to do so: from everyday decisions about whether a situation will harm us, to conscious estimates of how the stock markets will behave. In my world, teams are regularly asked to predict when something will be ready for market or how big a task is.
So it was with great interest that I listened (and re-listened) to a podcast from the authors of Freakonomics entitled “The Folly of Prediction”. It’s well worth a listen, but here are some of the points I found most interesting and relevant.
Why we make predictions
There is a demand for predictions. We need them to survive (e.g. will this situation cause me harm?), but we also use them to make money (e.g. how will the price of oil change?) and gain notoriety (e.g. I predict that Swansea will beat Hull 2-1 this weekend, with Routledge scoring the first goal in the 49th minute).
Pundits (especially in finance and sport) feature heavily in the podcast. And this is the first take-away: pundits gain notoriety from BIG predictions; nobody is interested in “I predict that the world will stay the same for the next 12 months”. But who is recording the accuracy of these predictions? Who reminds us of the predictions that didn’t happen? Nobody, of course, because the person reminding us about the big prediction that came true is normally the person who made it; there is no incentive for them to remind us of the others.
The world is not predictable – however much we want it to be
As the world gets more organised and formulaic (e.g. a Big Mac is the same in New York, Paris and Moscow), we expect the world to be more predictable. We “look for a neat pattern, even if no such pattern exists”, as Stephen Dubner puts it. Or, as Philip Tetlock (psychology professor and author) says, “people try hard to impose causal order on the world around them, even when those phenomena are random.”
“The future is full of random events” – Stephen Dubner
Tetlock tracked the predictions of 300 ‘experts’ over 20 years (about 80,000 predictions). They answered questions like “At the next election in [country], will the current majority political party retain, lose or strengthen its status?” The ‘experts’ did a little better than pure random guessing, and better than a group of undergrads, but they were worse than an extrapolation algorithm that predicted the world would remain unchanged. In summary, the experts “were systematically overconfident” in their ability to predict.
Being bad, but getting away with it
Steven Levitt points out that, for many experts (such as pension fund managers), “the goal of prediction is to be completely within the pack”. As long as everyone else makes the same mistake, they aren’t deemed incompetent.
Also, the word “could” protects pundits from accountability. Tetlock suggests the possible meanings people attach to “could” range from a probability of 1% to 60%!
How to be better at doing it
According to Tetlock, the sign of a good predictor is a “capacity for constructive self-criticism”. If you can’t answer the question “what would it take to convince you that you are wrong?”, then you are veering towards dogmatism (a sign of a bad predictor). The ability to re-assess one’s predictions is key. I feel a reference to ‘The Cone of Uncertainty’ coming on!
Along with re-assessing your predictions, there are two other suggestions for improving them.
Firstly, measure their accuracy. Although we are unlikely to see pundits with the equivalent of a batting average, you can measure your own predictions/estimates with little effort. [I refer you to Douglas Hubbard’s How to Measure Anything: Finding the Value of Intangibles in Business for help in this area]
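To show how little effort this takes, here is a minimal Python sketch of one common way to score predictions, the Brier score. The example predictions are made up for illustration; they are not from the podcast or from Hubbard’s book.

```python
# A minimal sketch of scoring your own predictions with a Brier score.
# The example predictions below are hypothetical.

def brier_score(forecasts):
    """Mean squared error between stated probabilities and actual outcomes.
    0.0 is a perfect forecaster; always saying 50/50 scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Each entry: (probability you assigned, what actually happened: 1 = yes, 0 = no)
my_predictions = [
    (0.9, 1),  # "We'll ship this sprint" - we did
    (0.7, 0),  # "The integration will pass testing first time" - it didn't
    (0.6, 1),  # "The customer will accept the demo" - they did
]

print(f"Brier score: {brier_score(my_predictions):.3f}")  # lower is better
```

Re-running something like this every few months tells you whether your estimates are actually getting better, rather than just feeling better.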
Secondly, base your predictions on accurate information. Prediction markets (i.e. futures markets like the stock market) reward people for making accurate predictions: you earn money when you are correct, and lose money when you make a bad call. A prediction market then uses what people are actually betting on as its forecast, as the small sketch below illustrates.
Question: Why don’t you bet your money on whether the price of oranges is going to go up or down next month?
Answer: Because you don’t know anything about that market
In prediction markets, people who know about the topic speak up, and people who don’t know about the topic shut up. [See the comments below for some interesting developments in this]
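To make that concrete, here is a toy Python sketch of my own (the contract and price are hypothetical, not from the podcast) showing how a simple binary contract turns the price people are willing to pay into an implied forecast.

```python
# A toy illustration of a binary prediction-market contract: it pays 1 unit
# if the event happens and 0 if it doesn't, so its market price can be read
# as the crowd's implied probability. All numbers here are hypothetical.

payout_if_event_happens = 1.00
price = 0.70  # hypothetical current trading price of the "yes" contract

implied_probability = price / payout_if_event_happens
print(f"Implied probability: {implied_probability:.0%}")  # -> 70%

# Buy at 0.70: gain 0.30 if the event happens, lose your 0.70 stake if not.
# Good forecasters profit and keep betting; bad ones pay and drop out,
# which is why people who don't know the market tend to shut up.
```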
So, the starting point is to (a) measure the accuracy of your estimates, and (b) make sure that you are basing those predictions on information from people who actually know what they are talking about.
They mention an intriguing programme run in the US by the Defense Advanced Research Projects Agency. In 2000, DARPA looked into using prediction markets to forecast geo-political situations in the Middle East: governmental changes, riots, terrorist attacks, wars, and so on, and to see how its own actions might affect those outcomes. It wanted the people closest to the action to bet on the situation in order to get more accurate predictions. However, the programme was canned due to a media frenzy (“Betting on terrorism!”). Tetlock is now working on a fascinating project called The Good Judgment Project, which aggregates forecasts from thousands of ordinary people around the world to predict geo-political events … and apparently the collective forecasts are surprisingly accurate!