Why are economic forecasts wrong so often?

The Queen of England famously asked why economists failed to foresee the financial crisis in 2008. "Why did nobody notice it?" was her question when she visited the London School of Economics that year.

Economists' failure to accurately predict the economy's course isn't limited to the financial crisis and the Great Recession that followed. Macroeconomic computer models also aren't very useful for predicting how variables such as GDP, employment, interest rates and inflation will evolve over time.

Forecasting most things is fraught with difficulty. See the current dust-up between Nate Silver and Sam Wang over their conflicting predictions about the coming Senate elections. Why is forecasting so hard?

Because so many things can go wrong. For example:

Bad data: If the data used in creating a computer model are mismeasured -- for example, a model needs the public's expectation of future output, but that expectation is measured poorly or with bias -- then forecasts will not be optimal. And sometimes the necessary data don't exist.

Bad luck: Often, the data used in a forecasting model are obtained by sampling some underlying population. For example, to measure unemployment, the Bureau of Labor Statistics uses a random sample of households. But if the researcher is unlucky and obtains a bad draw -- one that isn't very representative of the population it was taken from -- the forecasts will be more likely to deviate from what actually happens.
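A toy sketch of that luck of the draw (made-up numbers, not the BLS's actual survey methodology): repeated random samples from the same population yield different estimates of the unemployment rate, and an unlucky draw can land well away from the truth.

```python
import random

random.seed(42)

TRUE_RATE = 0.06  # hypothetical population in which 6 percent are unemployed

def survey_estimate(n):
    """Estimate the unemployment rate from a random sample of n workers."""
    unemployed = sum(1 for _ in range(n) if random.random() < TRUE_RATE)
    return unemployed / n

# Five independent surveys of the same population, same sample size --
# each gives a somewhat different answer, purely from sampling variation.
estimates = [survey_estimate(2000) for _ in range(5)]
print([round(e, 3) for e in estimates])
```

No survey is "wrong" here; the spread among them is the sampling error that feeds into any forecast built on the data.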

Modeling errors: These come in several types. Important variables can be inadvertently omitted from the forecasting model. Or a variable may need to be adjusted to work with the other variables in the model -- transformed into a growth rate, for example -- but the researcher fails to make the transformation. Sometimes the researcher assumes a relationship is approximately linear when that may not be the case at all.
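The first of those errors -- leaving out an important variable -- can be illustrated with made-up data (the coefficients below are arbitrary, chosen only for the demonstration): when an omitted variable moves together with an included one, the included variable's estimated effect absorbs part of the omitted one's, and the model gets the relationship wrong.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

x1 = rng.normal(size=n)               # variable the model includes
x2 = 0.8 * x1 + rng.normal(size=n)    # correlated variable the model omits
y = 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

# Regress y on x1 alone; the omitted x2 contaminates the estimate,
# pulling it toward roughly 2.0 + 3.0 * 0.8 = 4.4 instead of the true 2.0.
slope = float(np.sum(x1 * y) / np.sum(x1 * x1))
print(f"estimated effect of x1: {slope:.2f} (true effect is 2.0)")
```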

It's also possible to include too many variables in a forecasting model based upon their ability to predict the past. But if the only criterion for inclusion is the ability to predict past events, the model will generally do very poorly when used to predict events that have yet to be observed. Models with fewer variables based upon theoretical underpinnings will generally produce better forecasts than models built by simply tossing in extra variables.
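That trade-off can be sketched with made-up data (a toy exercise, not a real macro model): a heavily parameterized model fits the historical sample better than a simple one, yet forecasts the unseen future worse.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# Made-up "history": a simple linear trend plus noise.
x_hist = np.arange(20.0)
y_hist = 2.0 + 0.5 * x_hist + rng.normal(0.0, 2.0, size=20)

# The "future" the forecaster has not yet seen, from the same process.
x_future = np.arange(20.0, 30.0)
y_future = 2.0 + 0.5 * x_future + rng.normal(0.0, 2.0, size=10)

def mse(model, x, y):
    """Mean squared error of a fitted polynomial on data (x, y)."""
    return float(np.mean((model(x) - y) ** 2))

lean = Polynomial.fit(x_hist, y_hist, deg=1)      # few variables, theory-driven
bloated = Polynomial.fit(x_hist, y_hist, deg=12)  # many variables, fits the past

in_lean, in_bloated = mse(lean, x_hist, y_hist), mse(bloated, x_hist, y_hist)
out_lean, out_bloated = mse(lean, x_future, y_future), mse(bloated, x_future, y_future)
print(f"past fit:   lean {in_lean:.2f}  vs bloated {in_bloated:.2f}")
print(f"future fit: lean {out_lean:.2f} vs bloated {out_bloated:.2f}")
```

The bloated model "predicts" history better precisely because it has memorized the noise, and that memorized noise is what wrecks its out-of-sample forecasts.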

Error structure: Another potential modeling error has to do with what's called the "error term," a catch-all variable that accounts for everything not explicitly included in the model. All forecasting models are approximations. It's impossible to include every possible variable that might explain unemployment, GDP growth or some other variable of interest. Such a model would be too large and unwieldy to be useful.

The idea instead is to include only the variables that are important in answering a particular question -- what will GDP growth be a year-and-a-half from now, for instance -- and omit all of the less important variables. (The "art" of forecasting is knowing which variables to include and which to leave out.) All of the variables that are omitted are collected into an "error term."

It's often assumed that this error term follows a normal, bell-shaped distribution, but there are many other possibilities. If the errors are assumed to be normally distributed when they actually follow a very different distribution, the result will be less-than-optimal forecasts.
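A quick simulated sketch of why that assumption matters (the Student's t distribution here stands in for a generic fat-tailed alternative): if the true errors have fat tails, extreme forecast misses happen far more often than the normal assumption leads you to expect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

normal_errors = rng.standard_normal(n)
# Student's t with 3 degrees of freedom, rescaled to unit variance so the
# two error series differ only in their tail behavior.
fat_tailed_errors = rng.standard_t(df=3, size=n) / np.sqrt(3.0)

# Share of "three-sigma" surprises under each error distribution.
share_normal = float(np.mean(np.abs(normal_errors) > 3))
share_fat = float(np.mean(np.abs(fat_tailed_errors) > 3))
print(f"normal: {share_normal:.4f}  fat-tailed: {share_fat:.4f}")
```

A forecaster who assumes normality would treat those large misses as once-in-a-generation flukes when they are, in fact, routine.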

Structural change: The optimal forecasting model may change over time, and if the change is unaccounted for, that will lead to poor forecasts. For example, suppose the Federal Reserve changes the monetary policy rule it uses to set its target interest rate. Perhaps the Fed decides to put more weight on employment and less weight on inflation. In such a case, a model based on historical data will produce misleading forecasts because it will be based on data generated at a time when the Fed used a different monetary policy rule.

If the model doesn't include such changes -- and monetary policy rule changes are far from the only way structural change can occur -- then forecasts will be off the mark.
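The policy-rule example can be simulated (with an entirely hypothetical rule and weights, not the Fed's actual one): a model estimated on data from the old regime fits that regime well, then misses badly once the weights change.

```python
import numpy as np

rng = np.random.default_rng(2)

def policy_data(n, w_inflation, w_gap):
    """Simulate rates set by: rate = w_inflation*inflation + w_gap*gap + noise."""
    inflation = rng.uniform(0.0, 5.0, n)
    gap = rng.uniform(-3.0, 3.0, n)
    rate = w_inflation * inflation + w_gap * gap + rng.normal(0.0, 0.1, n)
    return np.column_stack([inflation, gap]), rate

# Old regime: heavy weight on inflation. New regime: more weight on employment.
X_old, r_old = policy_data(500, w_inflation=1.5, w_gap=0.5)
X_new, r_new = policy_data(500, w_inflation=1.0, w_gap=1.0)

# Estimate the rule from historical (old-regime) data by least squares.
coefs, *_ = np.linalg.lstsq(X_old, r_old, rcond=None)

mse_old = float(np.mean((X_old @ coefs - r_old) ** 2))
mse_new = float(np.mean((X_new @ coefs - r_new) ** 2))
print(f"error in old regime: {mse_old:.3f}  after the change: {mse_new:.3f}")
```

Nothing went wrong with the estimation itself; the historical data were simply generated under a rule that no longer applies.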

Inherent randomness: Sometimes it doesn't matter how good the model is. Given our current state of knowledge, there's no way to forecast the future about a lot of things. Think of earthquakes and the weather. Even though we have a pretty good understanding of the science underlying both, we still can't predict them very well. That could change as our scientific knowledge improves, but for now the weather, say, a year from now or the occurrence of an earthquake is mostly a random event.

It may be that important economic events are similarly hard to forecast given the current state of knowledge.

It may also be that recessions occur only when policymakers are unable to see them coming. If the trouble is known, policymakers will take action to steer around it. Hence, it's only the unforecastable events that actually occur. If that's true, it suggests that economists need to do everything that they can to improve their ability to forecast important economic events.