Experts can’t predict

“The evidence of the folly of forecasting is overwhelming.”



As the sixth-century BC poet and philosopher Lao Tzu observed, “Those who have knowledge don’t predict. Those who predict don’t have knowledge.” Yet, most economic, political, and investment experts seem to be obsessed with trying to guess the future.

The evidence of the folly of forecasting is overwhelming. So let’s go through a few facts and figures to give you the gist of the problem.

We’ll start with economists. These guys don’t have a clue. Frankly, the three blind mice would do a more credible job of telling us what’s coming than any macro-forecaster. For instance, the consensus of economists, including the Bangko Sentral ng Pilipinas, completely failed to predict last August’s 6.4-percent inflation spike (even when we were in it).

Investment analysts are no better. Their forecasting record is simply terrible on both short- and long-term investment issues. When analysts first forecast a company’s earnings two years before the actual results, they are, on average, wrong by a staggering 94 percent. Even over a 12-month horizon, they are wrong by about 45 percent! To put it mildly, investment analysts don’t have a clue about future earnings either.

If forecasts are really so bad, the natural question becomes: Why do people keep doing them? In part, it is a case of demand creating supply. If businesses, investors, and citizens want such meaningless information, then someone will provide it to them. I’ve had countless discussions with economic, political, and investment experts over the years as to the total pointlessness of issuing forecasts; their last line of defense is always “People want them.”

Yet one might wonder whether forecasters eventually tire of being utterly wrong and give up guessing the future. Instead, experts deploy a variety of excuses for forecast failure that allow them to avoid admitting they can’t predict.

Wharton professor and New York Times best-selling author Philip Tetlock has done one of the most comprehensive studies of forecasters, their accuracy, and their excuses. Studying experts’ views on a wide range of political events over a decade, he found that, across the vast array of predictions, experts who reported 80 percent or more confidence in their forecasts were actually correct only around 45 percent of the time. Across all predictions, the experts did little better than dart-throwing chimpanzees.

After each of the events passed, the forecasts were shown to be either right or wrong. Tetlock returned to the experts and asked them to reassess how well they thought they understood the underlying processes and forces at work. Despite the incontrovertible evidence that they were wrong, the experts showed no sign of losing faith in their own understanding of the situation.

Instead of any self-insight, Tetlock uncovered five frequently used excuses as to why the experts’ forecasts were wrong (which reminds me of an old description of an economist being an expert who will know tomorrow why the things he predicted yesterday didn’t happen today).

The common excuses were:

1. The “If only” defense—If only the BSP had raised interest rates fast enough, the prediction would have come true. Effectively, the experts claim that they would have been correct if only their advice had been followed.

2. The “ceteris paribus” (other things being equal) defense—Something outside of the model of analysis occurred, which invalidated the forecast; therefore it isn’t my fault. Blame TRAIN, oil prices, and Donald Trump.

3. The “I was almost right” defense—Although the predicted September 6.8 percent inflation rate outcome didn’t occur, it almost did.

4. The “It just hasn’t happened yet” defense—I wasn’t wrong, it just hasn’t occurred yet. I’ll never admit I’m wrong.

5. The “Single prediction” defense—You can’t judge me by the performance of a single forecast. I have a PhD, dammit. I’m smarter than you.

The excuses allowed the failed forecasters to continue making outrageously poor forecasts without any acknowledgment that they really got it wrong.

The excuses above appear frequently in the worlds of economics, politics, and investing. Two psychologists explored the excuses produced by financial and political experts, and by weathermen, when they got it wrong. The weathermen were disarmingly honest in explaining their errors. The most frequently cited reason for their failure was “personal inexperience,” followed by an acknowledgment that they were trying to forecast the inherently unforecastable.

Strangely enough, a very different set of excuses was encountered when it came to financial and political experts. Their common defense was that they shouldn’t be judged on the basis of just one forecast, which is known as the single prediction defense, followed by the excuse that something else happened outside the scope of their model, which is the ceteris paribus defense.

So, the next time you hear an expert coming out with an excuse to explain why he didn’t predict what actually happened, you can listen and see which of these paltry defenses he uses, but I suggest that you just run—not walk—away.


Topics: Eric Jurado, Experts can’t predict, economy, economists, political, investment analysts