update: Wren-Lewis kindly linked to this post (which is here, not at AngryBearblog.com, because I think it isn't worth your time). I couldn't resist writing a longggggggg comment on his blog. Then I came here to add this update and found a shrt cmmnt on this post. Brevity is the soul of wit. The patient reader can scroll past the long boring post to get to the long boring comment on the link to this long boring post.
He concedes that a case can be made because of the Lucas critique:
Another response might be that we know for sure that the eclectic model will be wrong, because (for example) it will fail the Lucas critique. More generally, it will not be internally consistent. But we also know that the microfounded model will be wrong, because it will not have the right microfoundations. The eclectic model may be subject to the Lucas critique, but it may also - by taking more account of the data than the microfounded model - avoid some of the specification errors of the microfounded model. There is no way of knowing which errors matter more.

I throw the usual cow in comments:
I think you understate your case. I think that misspecified microfounded models are vulnerable to the Lucas critique. I guess that at first blush my claim seems strange, as the Lucas critique is taken very seriously, everyone agrees that all existing models are misspecified, and yet many people think they have dealt with the Lucas critique. But I also think it is easy to prove my claim.
Let's consider consumption. It is claimed that we can avoid the Lucas critique by estimating deep parameters, say a rate of time preference and an intertemporal elasticity of substitution. Then we can reliably forecast, say, the effect of a permanent shift in the real interest rate on the rate of growth of consumption. Since the deep parameter is structural, it won't change when the policy changes. So the forecast effect will be OK provided we have estimated the elasticity with enough data on, oh, say aggregate consumption, or maybe consumption of nondurables and services. Ooops: those would give two quite different estimates, which can't both be OK. The estimate made using available fluctuations in real interest rates doesn't give the right prediction about the effect of a permanent increase unless the model is correctly specified. If we incorrectly assume nondurability, we will systematically overestimate the effect of the new policy on the growth rate of consumption.
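To spell out the durables example, here is one way to write it down (my notation, a sketch under the standard time-separable CRRA setup, not a derivation from the post):

```latex
% Log-linearized Euler equation for the service flow of consumption c_t,
% with sigma the intertemporal elasticity of substitution and rho the
% rate of time preference:
\[
  \Delta \ln c_{t+1} = \sigma \, (r_t - \rho) + \varepsilon_{t+1}
\]
% If the data are instead expenditures x_t on goods that are partly durable,
% a temporary rise in r_t moves x_t by more than c_t (purchases of durables
% are postponed: a stock adjustment), so OLS on
\[
  \Delta \ln x_{t+1} = \hat{\sigma} \, (r_t - \rho) + u_{t+1}
\]
% delivers \hat{\sigma} > \sigma. Forecasting a permanent shift in r with
% \hat{\sigma} then overstates the long-run growth response, which is
% governed by the true \sigma.
```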
Another very similar (but opposite) example: what if in the real world there is habit formation? Again, fitting the available data with temporary fluctuations in r does not imply accurately forecasting the effect of a permanent shift in r (this time we will underestimate the effect on the growth rate of consumption).
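The habit-formation case can be illustrated numerically. The sketch below is my own toy example, not from the post: the process `dc[t] = h*dc[t-1] + sigma*(1-h)*r[t]` is a stand-in for a log-linearized habit Euler equation, and `h = 0.5` is a hypothetical habit parameter. Fitting the standard no-habit Euler equation to data generated this way recovers an elasticity well below the true long-run response.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0   # true long-run intertemporal elasticity
h = 0.5       # habit persistence (hypothetical value)
T = 50_000

# purely transitory real-interest-rate fluctuations (demeaned)
r = rng.normal(0.0, 0.01, size=T)

# consumption growth under habit formation: adjusts sluggishly to r
dc = np.zeros(T)
for t in range(1, T):
    dc[t] = h * dc[t - 1] + sigma * (1 - h) * r[t]

# fit the misspecified no-habit Euler equation  dc_t = sigma_hat * r_t  by OLS
sigma_hat = float(dc @ r / (r @ r))

# a *permanent* shift in r moves steady-state growth by sigma * dr,
# but the fitted model predicts only sigma_hat * dr
print(f"estimated elasticity: {sigma_hat:.2f}, true long-run elasticity: {sigma:.2f}")
```

In steady state, dc = h·dc + sigma·(1−h)·r implies dc = sigma·r, so the long-run response is the full sigma, while OLS on transitory fluctuations converges to sigma·(1−h): exactly the systematic underestimation described above.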
To avoid the Lucas critique, it has to be that the parameters which we model as deep structural parameters of tastes or technology are, in fact, deep structural parameters of real-world utility and production functions. We can be confident that our model is not vulnerable to the Lucas critique only if we can be confident that it is the truth. No problem is or can be resolved by the fact that we use mathematical optimization, after making false assumptions, to get behavioral implications such that we can estimate parameters and fit the data.
update: My looooooong comment on his kind post.
Thanks for the link. Below, I will try to explain what my post clearly failed to explain. It is, at most, a semantic point (so not worth the bother to scroll down).
I have some possibly more relevant thoughts. The first is, yes, exactly "time": I recall, 31 years ago, reading an interesting, appealingly modest argument made by Sargent for the then fairly new approach to macroeconomics. He said then, in 1982, that the models they were working with were clearly not realistic, but there was hope that there would be models which were both internally consistent and realistic in thirty years.
Now I'm pretty sure he had been saying "30 years" for several years before I read the phrase (it was in an interview in some magazine like Business Week or Fortune), so I'm pretty sure that time has been up for a while. I note also that researchers are not normally given 30 years to show results.
I have two disagreements with Yates which I think go beyond yours. First, I do not see any basis for the claim that the best way to forecast is with a VAR. VARs are used a lot; their forecasting performance is miserable. It seems to me that Keynesian economists such as, say, P Krugman, S Wren-Lewis and C Romer outperform VARs. The evaluation of the ARRA (the US stimulus) rests on deviations from VAR-based forecasts; the deviations are significant and very similar to those predicted by C Romer. I think the paleo-Keynesian approach provides better forecasts than pure time series analysis, and better policy guidance than DSGE models in which (for example) liquidity constraints are assumed away.
Finally, there is a possible aim in addition to forecasting and policy analysis: the quest for truth, for understanding for its own sake. A strawman might argue that ad hoc models are useful for forecasting and policy analysis, but that there is also pure science, which is a quest for knowledge for its own sake. The mildly interesting thing is that no actual person makes that argument. The internally consistent microfounded models are very obviously not the truth. The field demands some sort of purity, but does not claim to be pure science; it claims, rather, to be the quest for useful approximations.
OK, so on to boring semantics and the "Lucas critique." I understand that the point of my post (if any) was hard to understand. In any case it is purely semantic, almost certainly a distinction without a difference. I think that "these misspecification errors" are "errors due to the Lucas critique".
A model avoids the Lucas critique if the estimated parameters of the model correspond to slowly changing deep aspects of tastes and technology. The model does not avoid the Lucas critique if the modeller merely imagines a fictional world in which those parameters describe tastes and technology. Parameters which are, incorrectly, interpreted as measurements of some aspect of tastes and technology in fact incorporate agents' beliefs about future policy. (I am thinking of rho, sigma, the elasticity of labour supply, the elasticity of substitution of capital and labour (always assumed to be one by macroeconomists), and the share of capital.) For a different public policy, different parameters would be estimated. Internal consistency does not at all imply that parameters are policy invariant. I assert that there is no connection between concern about the Lucas critique and a desire for internally consistent microfoundations.
I applaud your politeness. However, I can't manage it. My problem with the micro founders is that they also argue that models are false by definition, so it makes no sense to test their assumptions. I think a case can be made for settling for nothing less than a hypothesis with testable implications which has not been rejected. But I think there is a plain logical INconsistency* in arguing that it is vital for models to be internally consistent but that there is no need for the assumptions to be true. To try to be brief, I think that the Lucas critique is the Lucas critique of Friedman's methodology. In that one paper, Lucas said nothing more and nothing less than that Friedman was wrong (and Friedman could have written the same after Lucas's "Econometric Policy ..." paper, explaining why he found the argument unconvincing). If anyone is interested, I go on at length here.
I applaud your politeness, but have trouble being polite to fanatics for logical consistency whose arguments for what they call logical consistency are, as far as I can tell, obviously logically inconsistent.
*update: typo corrected, thanks to a commenter.