Thursday, October 27, 2016

Benchmark II

I wrote a post on Benchmarks which got some attention.

I defined a benchmark model as a model which we do not think is a useful approximation to reality but "which we wish to use only by contrasting it with models which we think might be useful approximations to the truth."

I'm not going to read my old post and might repeat things here.

I argued that the choice of benchmark models is not at all innocent and that economists' choice of benchmarks can affect economic outcomes.

My claim is that a chosen benchmark might give the incorrect impression of a consensus, and that non-specialists are especially likely to mistake the choice of a benchmark for a scientific discovery.

I will discuss two examples of assumptions made just for the sake of benchmarking which I think are dangerous. First, most macroeconomists have agreed to treat technology as exogenous. I think this particular choice helps explain Paul Romer's extreme irritation with the profession. Second, most macroeconomists have agreed that long run forecasts are best made using a neoclassical model without frictions. This means that the macroeconomic discussion is about the optimal model of convergence to a long run which is given by assumption and not by analysis or evidence. This provoked Roger Farmer to be almost as harsh as Paul Romer (by the way, reading that Farmer post is a much better use of your time than reading this post).

First, I would like to use these two examples to describe benchmarking, that is, to ask why assumptions may become conventional (almost universal) even if they are neither plausible a priori nor supported by data.

It is obvious that technological progress is the product of human efforts and doesn't fall out of the sky (like genuinely exogenous meteors). There is a separate sub-field of macroeconomics called growth economics which attempts to understand and explain the long run. The benchmark model used by business cycle macroeconomists (that is, most macroeconomists) is based on the assumption that this literature reached its epitome, zenith, optimum and telos with the Ramsey-Cass-Koopmans model in the 1960s (see above "irritation" and "Romer").
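To make the benchmark concrete: the key property of these neoclassical growth models with exogenous technology is that capital per effective worker converges to a single steady state no matter where the economy starts, so the long run is fixed by assumption. Here is a minimal sketch of that property using a Solow-style law of motion (a simpler cousin of Ramsey-Cass-Koopmans with the same convergence behavior); all parameter values are illustrative assumptions, not from the post.

```python
def simulate(k0, alpha=0.3, s=0.2, delta=0.05, g=0.02, n=0.01, periods=500):
    """Iterate the law of motion for capital per effective worker k.

    With exogenous technology growth g and population growth n, next
    period's capital per effective worker is:
        k' = (s * k**alpha + (1 - delta) * k) / ((1 + g) * (1 + n))
    """
    k = k0
    for _ in range(periods):
        k = (s * k**alpha + (1 - delta) * k) / ((1 + g) * (1 + n))
    return k

# Two economies with very different starting points...
low = simulate(k0=0.5)    # capital-poor economy
high = simulate(k0=20.0)  # capital-rich economy

# ...end up at the same balanced growth path: the steady state
# k* = (s / ((1+g)*(1+n) - 1 + delta))**(1 / (1 - alpha)),
# independent of initial conditions and of any policy that does not
# change s, delta, g, or n.
print(low, high)
```

The point of the sketch is the one the post criticizes: in this benchmark, history and macroeconomic policy can affect the transition path, but never the long run destination, because that destination is pinned down by the exogenous parameters alone.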

I think that it is standard to assume exogenous technology and convergence to a unique Ramsey-Cass-Koopmans balanced growth path for three reasons.

1) Business cycle macroeconomists want to focus on the business cycle. There is an agreement to divide the macroeconomic research program into growth theory and the rest of it. People who focus on the rest of it gain the ability to understand each other by agreeing to use the same model of long run growth. They (we ?) agree on Ramsey-Cass-Koopmans for the sake of discussion of topics other than long run growth.

2) It is almost impossible to model technological progress. If one could understand how things are invented and predict what will be invented, one had better be an inventor than an economist. Technological progress is exogenous to our models because we don't think we can model it. So either we give up or we treat it as given.

3) It is hard to evaluate models of the long run, because there is a shortage of non-overlapping long runs. The empirical literatures are very different, with different data sets, techniques and necessary but not convincing assumptions. The long run evidence mostly concerns events which occurred long ago or far away. Even if economists were convinced that this evidence is relevant to first world macroeconomic policy makers, we wouldn't be able to convince the policy makers.

Now the three explanations of why we benchmark are three arguments for not taking those shared assumptions seriously. They are what we all say about things we don't think about, don't understand and don't observe.

Unfortunately, they are also the questions on which macroeconomists appear to agree. The assumption that technology is given is almost always made. It is almost always assumed that the long run expected values of (properly scaled) variables are unique and not affected by macroeconomic policy. This happens because macroeconomists do not want to talk about the causes of technological progress or the determinants of long run outcomes.

This means (as stressed by Farmer) that the natural rate hypothesis of unemployment is accepted by default. It is also assumed to be valid here in Rome, where unemployment has been much higher in the past 4 decades than it was before.

More generally, it is assumed that macroeconomic policy can't affect the long run values of real variables. This means that macroeconomists tell politicians that far-sighted statesmen will focus only on price stability, because the real variables will take care of themselves. This means that politicians who don't trust themselves (or especially each other) will impose that exclusive focus as a rule.

So we have a European Central Bank with a single price stability mandate, and a Stability and "Growth" pact which forbids fiscal stimulus.

I think the effects of these policy choices have been horrible. I also think that at least part of the blame belongs to macroeconomists who wanted to focus on something else and neglected to warn policy makers that their agreements for the sake of argument weren't a scientific consensus.
