This post is of interest only to me. I promise.
For decades I have been puzzled by the widespread conviction among macroeconomists that models which yield poor forecasts nonetheless provide useful guidance to policy makers. I don't see how one can conclude that a hypothesis has any value whatsoever if it doesn't fit the data. It is true that a model which is rejected by the data might be useful. "Might" makes right. It also might be useful to sacrifice chickens and examine their innards. But that's not the way to bet.
I thought to myself, "I can't make a statement about probabilities. I can't say that a model which gives better out-of-sample forecasts also gives better forecasts of the effects of different policies." This is obvious and well known, so there is no point in giving an example. So I will give an example.
First, note that I am saying that I can't prove that the macroeconomists with whom I disagree are wrong.
The reason I can't say anything definite, even anything definite about probabilities, is that outcomes depend both on policy and on exogenous factors. I will assume that the effects of policy and of the exogenous factors on outcomes are additively separable. If you have two models A and B, model A might have better estimates of the effect of policy but worse estimates of the exogenous factors. Then model A will give worse unconditional forecasts, but better estimates of the difference in outcomes between applying policy 1 and applying policy 2. Under additive separability, the poor forecast of the exogenous factors adds the same error to the forecast conditional on policy 1 and to the forecast conditional on policy 2, so it does not affect the estimate of the difference between the outcome with policy 1 and the outcome with policy 2. That difference is what we want to know. This is totally obvious, so I will give a small numerical sketch and then a more explicit example.
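Here is a minimal numerical sketch of the point, in Python. Everything in it is made up for illustration: the outcome equation, the two models, and all the numbers are hypothetical, not estimates from any actual model.

```python
# Toy outcome equation: outcome = (policy effect) * policy + exogenous factor.
# All numbers are made up for illustration, not estimates from any real model.

TRUE_POLICY_EFFECT = 1.0   # true effect of raising the policy variable by one unit
TRUE_EXOG = 5.0            # true value of the exogenous factor (e.g. warm weather)

def true_outcome(policy):
    return TRUE_POLICY_EFFECT * policy + TRUE_EXOG

# Model A: good estimate of the policy effect, bad forecast of the exogenous factor.
# Model B: bad estimate of the policy effect, good forecast of the exogenous factor.
model_A = {"policy_effect": 1.1, "exog": 3.0}
model_B = {"policy_effect": 2.0, "exog": 4.9}

def forecast(model, policy):
    return model["policy_effect"] * policy + model["exog"]

# Unconditional forecast under current policy (normalize the current setting to 0):
# model B wins, because the forecast is dominated by the exogenous factor.
current_policy = 0.0
for name, m in [("A", model_A), ("B", model_B)]:
    err = abs(forecast(m, current_policy) - true_outcome(current_policy))
    print(f"forecast error, model {name}: {err:.2f}")   # A: 2.00, B: 0.10

# Effect of switching from policy 2 (policy = 0) to policy 1 (policy = 1):
# model A wins, because the exogenous-factor error is the same under both
# policies and cancels in the difference.
for name, m in [("A", model_A), ("B", model_B)]:
    diff = forecast(m, 1.0) - forecast(m, 0.0)
    print(f"estimated policy difference, model {name}: {diff:.2f} (true: 1.00)")
```

Model B wins the forecasting contest because, at the current policy setting, the forecast is dominated by the exogenous factor; model A wins the policy question because the exogenous-factor error drops out of the difference between policies.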
Forecasts of employment in the USA in January, February, and March 2012 were incorrect: employment was higher than forecast in January and February, and lower in March than forecasts based on January and February employment. It is widely agreed that the reason is that the weather was unusually warm in January and February 2012. I will assume this is true. I will also assume that macro policy doesn't affect the weather significantly in the medium term (so I am assuming away global warming; this is just an example).
Now let's go back to model A and model B. Model A is a standard macro model and a good one. Model B has two modules: a bad macro model and an excellent model of the weather, which gives excellent weather forecasts. So model B gives better forecasts under current policy, but it gives worse forecasts of, say, the effects of QE III.
Now, if one added the weather forecasting module to model A, one would have the best available model. It would give better forecasts than model A even in the cheating competition in which the actual weather, which occurs after the forecast is made, is plugged into the model. The weather module reduces the spread of the disturbance terms in model A plus weather, so the parameter estimates are more precise.
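A minimal sketch of that last claim, assuming a toy linear model in which the weather happens to be uncorrelated with policy, so leaving it out costs precision rather than causing bias; the coefficients, sample size, and variances are all made up:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Simulated history: employment depends on a policy variable and on the weather.
# All coefficients and variances are made up for illustration.
policy = rng.normal(0, 1, n)
weather = rng.normal(0, 1, n)
employment = 1.0 * policy + 2.0 * weather + rng.normal(0, 0.5, n)

def ols_first_coef(X, y):
    """OLS: coefficient on the first regressor, its standard error, residual s.d."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[0], np.sqrt(cov[0, 0]), np.sqrt(sigma2)

# Model A: policy only, so the weather ends up in the disturbance term.
X_A = np.column_stack([policy, np.ones(n)])
# Model A plus the weather module: the weather is explained, so the disturbances shrink.
X_Aw = np.column_stack([policy, weather, np.ones(n)])

for name, X in [("model A", X_A), ("model A + weather", X_Aw)]:
    b, se, sigma = ols_first_coef(X, employment)
    print(f"{name}: policy coefficient {b:.2f}, std. error {se:.3f}, residual s.d. {sigma:.2f}")
```

The policy coefficient comes out roughly the same in both regressions, but its standard error is much smaller once the weather is explained rather than left in the disturbance term.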
In this case, it is easy to see what to do with models A and B. In the real world it is hard.
I am sure no one will learn anything from this post. Everyone who knows what I am talking about knows my conclusion.
1 comment:
But we both know there are a lot of models out there that aren't useful for forecasting and which contain such unrealistic assumptions that they can't even be said to be modeling the world we live in. But they are very popular.