Saturday, February 28, 2015

Lucas Critique upside down

I think this is even more Noah Smith bait than Mark Thoma bait.

First, the Twitter version of the Lucas critique. Lucas argued that if one wishes to control some variables (say inflation and unemployment), it is a mistake to look for an equation which predicts them as a function of variables which you can control and then to assume that the function will stay the same when you manipulate those variables. The reason is that estimated coefficients are typically not discoveries of natural laws which always hold. Typically, the effort to use the estimated function to manipulate the system will cause the coefficients to change.

A very simple version of the same point is the statement that correlation is not causation.

In particular, to the extent that people's behavior depends on expectations about the future, and that policy changes the probabilities of future outcomes conditional on past data, relations which depend on expectations formed from past data will change when policy makers attempt to exploit them.
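To make that concrete, here is a stylized toy simulation (my own made-up numbers, not anything from Lucas or from this post): unemployment responds only to inflation surprises, so a regression of unemployment on inflation looks exploitable as long as inflation movements are unanticipated, and the estimated coefficient changes once policy deliberately moves anticipated inflation around.

```python
import numpy as np

rng = np.random.default_rng(0)
T, u_star, a = 2000, 5.0, 2.0        # sample size, "natural" rate, effect of a surprise

def ols_slope(x, y):
    """Slope from an OLS regression of y on a constant and x."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def estimated_tradeoff(target_sd):
    """Inflation = anticipated (policy-chosen) target + surprise; only the
    surprise moves unemployment, so only surprises create a real tradeoff."""
    target = rng.normal(0.0, target_sd, T)     # anticipated part of inflation
    surprise = rng.normal(0.0, 1.0, T)         # unanticipated part
    pi = target + surprise
    u = u_star - a * surprise + rng.normal(0.0, 0.5, T)
    return ols_slope(pi, u)

# Passive regime: nearly all inflation variation is unanticipated.
print("slope, passive regime  :", round(estimated_tradeoff(0.1), 2))  # roughly -2
# Activist regime: policy moves anticipated inflation around to exploit the slope.
print("slope, exploited regime:", round(estimated_tradeoff(3.0), 2))  # much closer to 0
```

The estimated "tradeoff" is not a natural law; it is a property of the policy regime under which it was estimated.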

Lucas stressed that he wasn't the first to worry about this problem. The critique is sometimes called the Marschak critique.

One proposed solution is to write down models in which agents' objectives and knowledge are modelled explicitly, and to add the assumption that agents know the joint distribution of all variables (have rational expectations). I find this proposal totally unconvincing. In particular, it is not argued that the assumptions about objectives have to be accurate. I believe the argument is that, while the model is not reality, it is important that, if the model were reality, the statistics would be consistent estimates of deep structural parameters which are policy invariant. I am totally 100% unconvinced.

People who accept this argument will often make a concession to the other approach (reduced-form modelling, atheoretic empiricism, or, in macroeconomics, vector autoregressions (VARs)). The concession is that such models are useful for forecasting. If one is just trying to predict what will happen and doesn't have the power to change it, then the Lucas critique doesn't matter. The parameters are invariant to what you do, as you have no power.
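For what it's worth, that kind of reduced-form forecasting exercise can be sketched in a few lines. The two series and the coefficients below are invented purely for illustration, and nothing structural is claimed about them.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
# Two made-up series standing in for, say, inflation and unemployment.
y = np.zeros((T, 2))
A_true = np.array([[0.6, 0.1],
                   [0.2, 0.7]])
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(0.0, 0.5, 2)

# Equation-by-equation OLS: regress y_t on a constant and y_{t-1} (a VAR(1)).
X = np.column_stack([np.ones(T - 1), y[:-1]])
B, *_ = np.linalg.lstsq(X, y[1:], rcond=None)      # shape (3, 2): intercepts + lag matrix

# One-step-ahead forecast from the last observation.
forecast = np.concatenate(([1.0], y[-1])) @ B
print("one-step-ahead forecast:", np.round(forecast, 3))
```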

So the view which I don't accept is that structural models with optimizing agents are required for valid policy evaluation by the strong, who do what they will, while other models may be just as good or better for forecasting by the weak, who suffer what they must.

In fact, I partially disagree with the Lucas critiquers' concession too. It is just not true that it is best to be completely open-minded if one only wants to forecast. Imposing assumptions on the data is required for forecasting too, and the imposition of false assumptions may produce lower expected squared forecast errors.

I think that if one is attempting to forecast using very little data to estimate the parameters of the forecasting rule (in other words, if one is a macroeconomist attempting to forecast), then one must impose a lot of assumptions just to estimate anything at all, and it is better to impose quite a few more than the bare minimum needed to come up with an estimate and a forecast.

So I think that if one is attempting to forecast using aggregate time series, it is best to impose assumptions: first, if you think they are more likely to be roughly close to true than equally strong alternative assumptions, and second, it is often better to impose more of them. I mention the Akaike information criterion in passing.
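As a rough sketch of what the AIC does (with made-up data and my own bookkeeping for the parameter count), here is the criterion computed for autoregressions of increasing order fit to a short series; the penalty on extra parameters is what pushes the choice toward parsimony when observations are few.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 60
y = np.zeros(T)
for t in range(1, T):                      # the true process is a plain AR(1)
    y[t] = 0.5 * y[t - 1] + rng.normal()

def ar_aic(y, p):
    """OLS fit of an AR(p) with intercept; Gaussian AIC = 2k - 2*loglik.
    Rough sketch: different p use slightly different effective samples."""
    n = len(y) - p
    Y = y[p:]
    X = np.column_stack([np.ones(n)] + [y[p - j: len(y) - j] for j in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    k = p + 2                              # intercept, p lag coefficients, error variance
    return 2 * k - 2 * loglik

for p in (1, 2, 4, 8):
    print(f"AR({p}): AIC = {ar_aic(y, p):.1f}")   # the small models usually win here
```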

I might sometimes accept an argument along the following lines: actual behavior is more like the behavior of this type of rational agent with simple objectives than anything else I can think of, so I will assume that we are all rational agents of that type when using the few data I have to develop a forecasting rule.

I might sometimes accept the same argument when the aim is to evaluate policy and advise policy makers. I find it roughly equally convincing in each case. Stories about how expectations matter, and will change in a predictable way, might convince me that some assumptions are better than others (including weaker assumptions), but I don't see anything special about them.

In contrast, arguments about how stronger assumptions should be made when one has fewer data make a whole lot of sense to me. There are mathematical examples in which the argument is entirely valid. I find it very plausible that it is generally valid.
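Here is one such toy example, constructed by me rather than taken from anywhere in particular: forecasting a series whose true mean is small but not zero. Imposing the false assumption that the mean is exactly zero beats estimating the mean whenever mu^2 < sigma^2/T, that is, whenever the data are few enough; a quick Monte Carlo confirms it.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, T, reps = 0.1, 1.0, 20, 20000   # small true mean, few observations

err_estimated, err_restricted = 0.0, 0.0
for _ in range(reps):
    sample = rng.normal(mu, sigma, T)      # the few data we get to see
    future = rng.normal(mu, sigma)         # the value we want to forecast
    err_estimated += (future - sample.mean()) ** 2    # unrestricted forecast
    err_restricted += (future - 0.0) ** 2             # impose the false "mean is zero"

print("MSE, estimated mean:", round(err_estimated / reps, 4))   # ~ sigma^2 + sigma^2/T = 1.05
print("MSE, imposed zero  :", round(err_restricted / reps, 4))  # ~ sigma^2 + mu^2      = 1.01
```

The false restriction wins exactly because, with only twenty observations, the variance saved by not estimating the mean exceeds the squared bias introduced by getting it slightly wrong.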

3 comments:

  1. Hi Robert,

    Nice post. I don't think Lucas had it in mind, but one way that making assumptions about agents is less problematic than making assumptions about macro relationships is the SMD theorem: the assumptions about agents don't matter so much at the macro level.

    I also think macro methodology leaves out a possibility: make as few assumptions about agents as possible ...

    http://informationtransfereconomics.blogspot.com/2014/12/information-equilibrium-theories.html

  2. If a model is not predictive because the coefficients change, one solution is to replace the coefficients with variables that better align the model predictions with reality. This requires collecting data on how the coefficient (to be made into a variable) behaves in response to conditions.

    Microfoundations are not necessary if you can create measurable variables that reflect the aggregate effect of micro level decisions. The data and the variables need to have a level of certainty such that the uncertainty of the predictions made by the model is low enough to be a useful guide. Is this not what is done in practice by people who make and use models to inform policy?
    -jonny bakho

  3. Is a model that has no predictive power actually a model? It sounds more like a description.

    Lucas does make some sense here. Every investor knows that past performance is no indication of future performance. One can build all sorts of optimizing trading "models", but these are just descriptions of the past. Your odds of making money from them in the future are limited.
