What’s the point of forecasting? This is the question posed for me by Michael Story’s account of superforecasters’ predictions for the course of the pandemic.
Take this:
Our central estimate is that the death toll will peak at 1,278, with an 80% confidence interval of 892 to 5,750.
So what? It’s not at all clear how this affects the debate about how tight the lockdown should be. That depends upon how the virus is transmitted; the trade-off between mental and physical health; how far the economic effects of the lockdown can be mitigated; and so on. Yes, the claim that the daily death toll could reach 5,750 is alarming, and speaks to the need for a tight lockdown. But you don’t need any precise number to make that case. The mere chance of a high death toll does just as well.
Similarly, the forecasters’ 80% confidence interval (that between six and 20 million people will be vaccinated by the end of February) is also irrelevant for policy. What matters is that the vaccine is delivered to as many people as possible. That’s a matter of logistics, not of forecasts.
If the forecasts are irrelevant for policy, what use are they?
Yes, if you’re spread-betting on the final death count, they might be useful. But few of us are doing this. And anybody who is faces the problem that superforecasters’ views should in theory get quickly embedded into prices. Which means that by now they don’t help us beat the market. We’re no better off.
Another potential use of forecasts is that they test hypotheses: a correct forecast should strengthen our confidence in the theory upon which it rests, whilst a wrong forecast should weaken it. We must therefore always ask of a forecast: was it right or wrong, why, and what do we learn from this? Jonathan Portes does this here, and I did here.
But again, it’s not obvious how the superforecasters help us in this respect. Let’s say we see low numbers of vaccinations, so the superforecasters are wrong. What hypothesis is then weakened? Sure, faith in the government’s logistical ability would be – but we don’t need superforecasters to assess this, merely progress in vaccination against the government’s own targets.
My reaction to this piece, then, is the same as that to very many forecasts, such as those for November’s US presidential election: there’s no point to them, as we’ll find out soon enough anyway.
In fact, for me, exercises such as this miss four more interesting issues in forecasting.
Firstly, is there any predictability in human affairs, and if so what is its origin?
Take, for example, my chart. It shows that medium-term returns on UK equities have been highly predictable by a simple ratio of retail sales to the All-share index: a high ratio predicts decent returns over the following three years (as I say, a forecast should be a test of a hypothesis).
This measure works for two reasons. First, consumer spending is, partly, forward-looking: if we anticipate good times we’ll spend more than if we anticipate bad, and on average across millions of consumers forecast errors largely cancel out. Secondly, equity investors do not fully price in this wisdom of crowds, causing equities to be under-priced when retail sales are strong relative to share prices. (For more sophisticated versions of this theory, see Lettau and Ludvigson (pdf) and this paper from the Bank of England).
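The sort of predictability test this implies can be sketched in a few lines. Everything below is synthetic: the numbers are made up to mimic a predictor like the retail-sales-to-All-share ratio, not drawn from the actual data behind the chart.

```python
import random
import statistics

random.seed(0)

# Hypothetical monthly data: a "retail sales / All-share"-style ratio,
# and subsequent three-year equity returns constructed so the ratio has
# some genuine predictive power. Illustrative only, not real data.
n = 240
ratio = [random.gauss(1.0, 0.1) for _ in range(n)]
future_3y_return = [0.5 * (r - 1.0) + random.gauss(0, 0.05) for r in ratio]

# A simple in-sample test of predictability: the OLS slope of future
# returns on the current ratio, plus the R^2 of that fit.
mx = statistics.fmean(ratio)
my = statistics.fmean(future_3y_return)
cov = sum((x - mx) * (y - my) for x, y in zip(ratio, future_3y_return)) / n
slope = cov / statistics.pvariance(ratio)
intercept = my - slope * mx
residuals = [y - (intercept + slope * x)
             for x, y in zip(ratio, future_3y_return)]
r2 = 1 - statistics.pvariance(residuals) / statistics.pvariance(future_3y_return)

print(f"slope = {slope:.2f}, R^2 = {r2:.2f}")
```

A positive, stable slope with a non-trivial R^2 is what "evidence of predictability" amounts to here; the serious versions of this exercise (Lettau and Ludvigson, the Bank of England paper) do the same thing with real data and proper out-of-sample checks.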
What we have here, then, is evidence of predictability and a reason for it. What are the analogues of these in other domains?
Secondly, who are the best forecasters: foxes (who know many things) or hedgehogs (who know one thing)? Philip Tetlock argues that foxes do better. This is not true in all domains, however. In my example, the hedgehog who looked only at the retail sales to All-share ratio would have done better than foxes who tried to process all possible information about prospective market returns. Also, if you want to know the chances of a recession, a single glance at the yield curve does far, far better than economists’ foxier forecasts. What works in one context doesn’t work in another.
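The yield-curve rule really is a one-liner, which is the point: a hedgehog signal can be written down in full. The yields below are approximate, illustrative numbers of my own choosing, not a real dataset.

```python
def curve_inverted(ten_year_yield: float, three_month_yield: float) -> bool:
    """The hedgehog's recession rule: warn when the yield curve is
    inverted, i.e. the 10-year yield sits below the 3-month yield."""
    return ten_year_yield - three_month_yield < 0

# Roughly representative (not exact) US Treasury yields:
observations = [
    ("2006-08", 4.88, 5.09),  # inverted curve ahead of the 2008-09 recession
    ("2013-06", 2.30, 0.05),  # steep curve, no warning
]
for date, y10, y3m in observations:
    print(date, "warning" if curve_inverted(y10, y3m) else "no warning")
```

One subtraction and a comparison; no foxy aggregation of dozens of indicators required.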
Thirdly (and perhaps relatedly): do forecasts change the environment or not? Most of those discussed by Tetlock do not: this is true of the Covid forecasts as well. Other forecasts, however, do. Optimistic forecasts for (say) where Bitcoin or Tesla shares will go raise their prices today and hence reduce future returns. Investor sentiment – which is correlated with share price forecasts – can predict future returns.
In such cases, we need a different approach. We should ask not “what will happen?” but “what information (if any) are market participants ignoring?” To borrow a useful analogy from Meir Statman’s Finance for Normal People, superforecasting is a game you play against the environment, whereas investing is one you play against other investors. These are different things.
Fourthly, do we need forecasts at all? Sometimes we don’t. I’ve shown that investors who diversified very simply have done perfectly well without forecasting anything. In the same spirit, the stock-picker who simply bought momentum or defensive (pdf) stocks would over time have out-performed those who tried to forecast returns for individual stocks or the market.
In other contexts, what matters is not a point forecast but rather the distribution of possibilities. The case for zero interest rates isn’t based on a particular forecast, but upon the balance of risks: there’s a risk of sustained high unemployment but less risk of soaring inflation.
The best policies should not rest upon a forecast. For example, a basic income can and should be justified on many grounds other than that we need one because automation will destroy jobs (maybe it will, maybe it won’t). And one reason why we need stronger counter-cyclical automatic stabilizers is that we cannot rely upon a predict-and-control approach to macro policy.
Now, I don’t say all this to attack superforecasting. Mr Story is bang right to say that bad forecasters should be driven out of the marketplace of ideas: the fact that they are not is because of perverse incentives in the media. And a lot of advice on how to be a better forecaster is valuable as advice on how to think better.
Forecasts are perhaps the least interesting part of the superforecasting project.
Basic Income also supports basic demand, which stabilises jobs.
Posted by: Big Nose | January 13, 2021 at 10:43 PM
The benefits of forecasting surely become apparent when outcomes differ from plans based on the forecast. For instance, if Gordon Brown had taken note of the 2000/2001 deviations from Treasury forecasts he would have re-examined Labour's fiscal strategy and avoided his government's and the country's later troubles.
Posted by: Frank Little | January 14, 2021 at 08:20 AM
Forecasts are like nanny's hand, something to hold on to in a scary world. Or like an insurance policy - 'well they said it would be OK', CYA.
At one time I did industrial market research forecasts. Really little more than guesswork, but what else does anyone have when building a big factory for widgets? Stick to real estate, you know it makes (made?) sense.
The trick was not to get found out...
Posted by: jim | January 14, 2021 at 08:39 AM
January 13, 2021
Coronavirus
UK
Cases: 3,211,576
Deaths: 84,767
Deaths per million: 1,245
Germany
Cases: 1,980,861
Deaths: 44,404
Deaths per million: 529
Posted by: ltr | January 14, 2021 at 02:21 PM
I am surprised that this article does not mention NN Taleb and his (low) opinion of superforecasters (I have not spotted any link to it), in particular this point:
https://forecasters.org/blog/2020/06/14/on-single-point-forecasts-for-fat-tailed-variables/
“Science is not about making single point predictions but understanding properties (which can sometimes be tested by single points).”
https://twitter.com/nntaleb/status/1210328617399533572
“That the right balance between noise and signal REQUIRES producing forecasts as if they were prices, buying and selling at every forecast. In other words SKIN IN THE GAME while forecasting. Otherwise it is bullshit.”
https://twitter.com/nntaleb/status/1299069089353170944
“In other words your forecast reflects the uncertainty built in because it raises low probabilities (below ½) and lower high prob so you converge to ½ under maximal uncertainty.”
Posted by: Blissex | January 14, 2021 at 09:28 PM
I am surprised that this article does not mention NN Taleb and his (low) opinion of superforecasters...
[ Absolutely; terrific comment. ]
Posted by: ltr | January 15, 2021 at 01:19 AM
January 14, 2021
Coronavirus
UK
Cases: 3,260,258
Deaths: 86,015
Deaths per million: 1,263
Germany
Cases: 2,003,985
Deaths: 45,492
Deaths per million: 542
Posted by: ltr | January 15, 2021 at 07:54 PM
January 15, 2021
Coronavirus
UK
Cases: 3,316,019
Deaths: 87,295
Deaths per million: 1,282
Germany
Cases: 2,023,779
Deaths: 46,537
Deaths per million: 554
Posted by: ltr | January 16, 2021 at 02:56 PM
January 16, 2021
Coronavirus
UK
Cases: 3,357,361
Deaths: 88,590
Deaths per million: 1,301
Germany
Cases: 2,035,657
Deaths: 46,978
Deaths per million: 560
Posted by: ltr | January 17, 2021 at 02:19 PM
Here: https://www.youtube.com/watch?v=BxinAu8ORxM&feature=emb_logo
Is this a 'forecast'?
“. . . our best estimate is that the net energy per barrel available for the global economy was about eight percent, and that over the next few years it will go down to zero percent. Best estimate at the moment is that actually, per average barrel of sweet crude, we had the zero percent around 2022, but there are ways and means of extending that, so to be on the safe side here on our diagram we say that zero percent is definitely around 2030 . . . we need net energy from oil and [if] it goes down to zero, well, we have collapse, not just collapse of the oil industry: we have collapse globally of the global industrial civilization. This is what we are looking at at the moment . . .”
Posted by: James Charles | January 18, 2021 at 09:14 AM
NN Taleb is a prime example of the Toby Young phenomenon.
The Good Judgment Project (GJP) is a project "harnessing the wisdom of the crowd to forecast world events". It was co-created by Philip E. Tetlock (author of Superforecasting and of Expert Political Judgment: How Good Is It? How Can We Know?), decision scientist Barbara Mellers, and Don Moore, all professors at the University of Pennsylvania.
It was a participant in the Aggregative Contingent Estimation (ACE) program of the Intelligence Advanced Research Projects Activity (IARPA) in the United States.
Predictions are scored using Brier scores. The top forecasters in GJP are "reportedly 30% better than intelligence officers with access to actual classified information."
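A Brier score, for reference, is just the mean squared error of probability forecasts against binary outcomes; the example numbers below are my own illustration, not GJP data.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts (0..1) and
    binary outcomes (0 or 1). Lower is better; perfection scores 0.0."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident, well-calibrated forecaster beats one who just hedges at 50%.
print(round(brier_score([0.9, 0.8, 0.1], [1, 1, 0]), 4))  # prints 0.02
print(round(brier_score([0.5, 0.5, 0.5], [1, 1, 0]), 4))  # prints 0.25
```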
It is not about being clairvoyant but about using intelligence in a structured and disciplined way, removing all traces of ego, to see which events are more likely than others to come to pass.
A 30% advantage over competitors in business or sports betting would make those superforecasters very rich indeed.
Posted by: joe | January 18, 2021 at 11:43 PM