
Monday, October 31, 2016

Long Term Effects of Comeygate ?

I assume the reader is familiar with the furor related to the letter sent by FBI director James Comey to Rep. Jason Chaffetz (Rep. Utah (in this case Rep. stands for reprehensible)).

I am wondering about what effects it will have after November 8th. There has been a good bit of discussion of how anything including the words "e-mail" and "Clinton" is bad for Democrats. Many guess that Hillary Clinton will, nonetheless, be elected president (I stress that I have never made any prediction related to that possibility and have *not* jinxed anything). Some suggest that the nothing burger will affect down ballot elections.

I am wondering about the long term effects in the hypothetical case that Clinton becomes President (I make no prediction one way or the other regarding this eventuality). It is very clear and alarming that some of the people insanely dedicated to getting Clinton are FBI agents. I guess this isn't really surprising.

I am wondering whether (and hoping that) the Clinton haters have gone too far. FBI director James Comey has been very widely denounced. For two days the story line has been "US government employees abuse their offices in their efforts to get Clinton." I think it is possible that the conventional wisdom will be that Clinton persecution is a serious national problem. I wonder if this might be very very useful to the hypothetical second President Clinton.

What Happened to the 4th amendment ?

The FBI now has a warrant to read e-mails sent and received by Huma Abedin which are stored on a laptop which she shared with Anthony Weiner.

Kevin Drum asks what took them so long and strongly suggests that someone at the FBI wanted to make sure that the investigation was opened and not closed before the election.

"If the FBI knew about these messages weeks ago, they could easily have gotten a warrant and begun looking at them."

I think it is clear that someone (not the director) at the FBI has deliberately used his or her position there to influence the election at the expense of the alleged public interest in investigating the Abedin e-mails. I'd say this is a clear violation of the Hatch Act and that, after being fired, this person should be investigated for possible criminal liability.

But there is something which isn't clear at all to me.

I don't understand how that was possible, let alone easy. I vaguely recall the phrase "probable cause". It doesn't seem to me that the fact that Huma Abedin uses e-mail is probable cause to believe a crime has been committed. Last I checked, e-mail was not banned. The fact that they sought a warrant means the FBI agrees that they can't read the stuff on that hard disk which was written by Abedin or to Abedin without a warrant. How can they obtain probable cause to believe something about what was written without reading any of it ?

Comey said that the Abedin related data on the hard disk "appears to be pertinent". How can one describe the appearance of something when one is not allowed to look at it ?

I stress that among the things that Abedin may have written which the FBI must not read without a warrant is the following text ".gov". I don't think they can obtain any evidence that any of the e-mails is work related without a warrant. I also don't think the FBI should have been granted a warrant without presenting evidence that at least one of the e-mails is work related.

I see no way to reconcile the decision of some judge with the 4th amendment.

Needless to say, I am not a lawyer.

Orin Kerr is a lawyer (and a conservative). His post is worth reading.

Saturday, October 29, 2016

Schadenfreude and the New York Times

I very much enjoyed the recent columns on the current status of conservatism by David Brooks and Ross Douthat. I don't have anything useful to write, because (as frustratingly usual) I have nothing useful to add to Paul Krugman's post. So I will write a useless post.

Briefly, Brooks and Douthat agree that conservatives have lost their way:

"Eventually a path for conservative intellectuals will open.

But for now we find ourselves in a dark wood, with the straight way lost." Ross Douthat

"Trump demagogy filled the void.

This is a sad story." David Brooks

I agree with Krugman that "both deserve credit for taking a critical look at their team."

I think there is now a consensus among the New York Times opinion staff. The two token conservatives just wrote that they agree with the rest of them. Note that "nytimes" appears in all three URLs, but neither the blog post nor either column includes the word "York". This shows the genuinely impressive elitist arrogance of the New York Times staff. All three present the current state of opinion among the New York Times opinion staff as the current state of informed opinion.

On Brooks I note only three things. First, he really led with his chin when he wrote

"I feel very lucky to ... The role models in front of us were people like ... Irving Kristol ..." and later "conservative opinion-meisters began to value politics over everything else." Did he really not know that Krugman would respond by typing "'the accounting deficiencies of government'" among other things ? Even sticking with the same nuclear family, he could have written "Gertrude Himmelfarb" instead of "Irving Kristol". Note that Brooks considers himself to be part of the conservative intellectual elite. I don't want to think what rhetorical errors he thinks non-elite conservatives make.

Second Krugman referred to Brooks as "David" in his blog. I don't recall that happening before. I think it is an olive branch.

Finally, Brooks wrote "The very essence of conservatism is the belief that politics is a limited activity, and that the most important realms are pre-political: conscience, faith, culture, family and community." This is extremely offensive (so no olive branch from me). That can only be the essence of conservatism if it is not shared by non conservatives. If Brooks's sentence means anything, it asserts that non conservatives think that politics is more important than "conscience, faith, culture, family and community." That claim comes close to libel. Of course Brooks doesn't mean that. He doesn't mean anything. He correctly describes the vaguely related difference between non conservatives and conservatives later:

For years, middle- and working-class Americans have been suffering from stagnant wages, meager opportunity, social isolation and household fragmentation. Shrouded in obsolete ideas from the Reagan years, conservatism had nothing to offer these people because it didn’t believe in using government as a tool for social good.

Believing or "not believing in using the government as a tool for social good" is the actual difference between non-conservatives and conservatives which Brooks incorrectly describes in his outrageous implied insult of non-conservatives. I don't think that it makes any sense to call Brooks a conservative -- he just plays one on TV. He can only claim to be conservative by dividing people into Leninists and "conservatives." I don't think he has ever even tried to explain how he can claim that, to name two examples, Paul Krugman and Hillary Clinton don't embrace his "essence of conservatism".

I think Brooks is reduced to justifying his claim to deserve to keep his excellent job with the following syllogism: "conservative thinkers have something useful to contribute. I am a conservative thinker. Therefore I have something useful to contribute." I don't think he has presented (or can present) an argument for either of the first two claims in the syllogism. I don't think he's even trying anymore.

Douthat is marginally more interesting. He is very openly elitist: the frankness of "the pyramid that is modern American conservatism has always been misshapen, with a wide, squat base that tapers far too quickly at its peak.

The broad base is right-wing populism, in all its post-World War II varietals: Orange County Cold Warriors, “Silent Majority” hard hats, Southern evangelicals, Reagan Democrats, the Tea Party, the Trumpistas. The too-small peak is the right’s intellectual cadres," is bracing.

His column is based to a large extent on the words "populism" and "managerialism". He abuses both.

First he discusses only right wing populism. This makes it possible for him to identify populism with right wing populism and write "the toxic tendencies of populism, which were manifest in various hysterias long before Sean Hannity swooned for Donald Trump," leaving out the qualifier "right-wing". The claim is technically true. If it just so happens that all those hysterias were right wing, it isn't necessary to note this fact. But Douthat should ask himself if he can think of analogous widespread left wing hysterias. I don't think they occurred. Not all 9-11 truthers support, and are praised by, Donald Trump, but those on the left are an isolated fringe. There are fairly large numbers of extremely angry leftists (black lives matter and occupy wall street) but their claims of fact are supported by overwhelming evidence and their policy proposals are reasonable. A solid majority of US adults have left of center populist views -- they support higher taxes on the rich and more generous entitlements. These people clearly have strong suspicions (for example that the GOP focuses mainly on serving its rich donors). But that belief is not paranoid hysteria. It is a fair summary of the available evidence. As noted by Krugman, neither Douthat nor Brooks even discusses the issue. I think that Douthat would have two separate problems if he did. First, Krugman explains how conservative commentators could remain prominent even if they have nothing useful to say. Second, the widespread but non hysterical egalitarian populist views demonstrate that populism isn't as prone to toxic tendencies as Douthat asserts.

I object more strongly to Douthat's use of "managerialism". First, it is pretentious jargon -- Burnham wrote a long time ago and few people remember him. I think that Douthat should replace "managerial" with "informed". I think that is what he means. It is an insult to managers to call university professors "managerial" (in my experience, this is also true of professors of business management). Most managers in the USA are middle managers of private sector corporations. That's not the managerial class Douthat has in mind. He makes it clear that he means "three generations after Buckley and Burnham, the academy and the mass media are arguably more hostile to conservative ideas than ever, and the courts and the bureaucracy are trending in a similar direction." (Note that somehow the mass media has been redefined to exclude Rush Limbaugh and Fox News.) Note that senators, representatives, governors, and Kochs are not included among the managerial elite. He isn't referring to the powerful or the rich. Of course he is thinking of the New York Times opinion staff and complaining that he has lost all the water cooler debates for years. But he should at least try to define a large important group which is "more hostile to conservative ideas than ever."

I believe that group would be the people who know and deal with facts which don't concern them personally (everyone deals with facts in our ordinary lives). Basically, I think the problem noted by Douthat is that conservatives have lost all the debates in which arguments must be supported by facts or logic. He has decided to define the people who know the relevant facts (and are honest) as "the overclass" because defining knowledge as class is a way to discredit it.

I think it is possible to remove these problems by always adding "right wing" to "populism" and replacing "manager" with "subject matter expert" and "managerial" with "of subject matter experts". The insinuation that those who have rejected conservatism are acting as a class is a cheap rhetorical trick (and just another sign that, if you want a recent example of the typical defects of Marxist thought, you should read a contemporary conservative).

In his "to be sure" passages, Douthat writes some positive things about one conservative and some conservative thought. These assertions are completely unconvincing.

Consider the wonderful internal contradiction in "in reality political conservatism’s leaders — including high-minded figures like Paul Ryan — turned out to have no strategy save self-preservation." Douthat asserts that Ryan is "high-minded" and has "no strategy save self-preservation". He doesn't claim that Ryan used to be high-minded and then changed. He doesn't admit that those who thought that Ryan was high minded were deceived by transparent flim flam. He admits that he and many others were totally wrong about Ryan but won't admit that Krugman and Jon Chait were right all along. I think that the absurd intellectual error can be eliminated by adding a single word, replacing "high-minded" with "allegedly high-minded". The extremely charitable might also accept "apparently high-minded".

Douthat also wrote

Partial revolutions there were. Free-market ideas were absorbed into the managerial consensus after the stagflation of the 1970s. The fall of Communism lent a retrospective luster to Reaganism within the foreign policy establishment. There was even a period in the 1990s — and again, briefly, after Sept. 11 — when a soft sort of social conservatism seemed to be making headway among Atlantic-reading, center-left mandarins.

I have noted that the claim that stagflation demonstrated any errors of earlier non-conservative economic thought is based entirely on lies. It is true that the boldly lying right managed to promote right wing economic ideas. The claim about "luster" is even farther from illustrating a case in which evidence turned out to support conservatives' claims. I am old enough to remember that, in the 70s and 80s, the conservative position was that the USSR and communist movements were much stronger than non-conservatives admitted. It turns out that non-conservatives were wrong only insofar as they came too close to agreeing with the claims of conservatives. I have no idea what "soft social conservatism" Douthat has in mind.

Thursday, October 27, 2016

Benchmark II

I wrote a post on Benchmarks which got some attention.

I defined a benchmark model as a model which we do not think is a useful approximation to reality but "which we wish to use only by contrasting it with models which we think might be useful approximations to the truth."

I'm not going to read my old post and might repeat things here.

I argued that the choice of benchmark models is not at all innocent and that economists' choice of benchmarks can affect economic outcomes.

My claim is that a chosen benchmark might give the incorrect impression of a consensus, and that non specialists are especially likely to mistake the choice of a benchmark for a scientific discovery.

I will discuss two examples of assumptions made just to benchmark which I think are dangerous. First, most macroeconomists have agreed to treat technology as exogenous. I think this particular choice helps explain Paul Romer's extreme irritation with the profession. Second, most macroeconomists have agreed that long run forecasts are best made using a neoclassical model without frictions. This means that the macroeconomic discussion is about the optimal model of convergence to a long run which is given by assumption and not by analysis or evidence. This provoked Roger Farmer to be almost as harsh as Paul Romer (by the way, reading that Farmer post is a much better use of your time than reading this post).

First I would like to use these two examples to describe benchmarking, that is, to ask why assumptions may become conventional (almost universal) even if they are neither plausible a priori nor supported by data.

It is obvious that technological progress is the product of human efforts and doesn't fall out of the sky (like genuinely exogenous meteors). There is a separate sub-field of macroeconomics called growth economics which attempts to understand and explain the long run. The benchmark model used by business cycle macroeconomists (that is, most macroeconomists) is based on the assumption that this literature reached its epitome, zenith, optimum and telos with the Ramsey-Cass-Koopmans model in the 1960s (see above "irritation" and "Romer").

I think that it is standard to assume exogenous technology and convergence to a unique Ramsey-Cass-Koopmans balanced growth path for three reasons.

1) Business cycle macroeconomists want to focus on the business cycle. There is an agreement to divide the macroeconomic research program into growth theory and the rest of it. People who focus on the rest of it gain the ability to understand each other by agreeing to use the same model of long run growth. They (we ?) agree on Ramsey Cass Koopmans for the sake of discussion of topics other than long run growth.

2) It is almost impossible to model technological progress. If one could understand how things are invented and predict what will be invented, one had better be an inventor than an economist. Here technological progress is exogenous to our models because we don't think we can model it. So either we give up or treat it as given.

3) It is hard to evaluate models of the long run, because there is a shortage of non-overlapping long runs. The empirical literatures are very different, with different data sets, techniques and necessary but not convincing assumptions. The long run evidence must mostly concern events which occurred long ago or far away. Even if economists were convinced that this evidence is relevant to first world macroeconomic policy makers, we wouldn't be able to convince the policy makers.

Now the three explanations of why we benchmark are three arguments for not taking those shared assumptions seriously. They are what we all say about things we don't think about, don't understand and don't observe.

Unfortunately, they are also the questions on which macroeconomists appear to agree. The assumption that technology is given is almost always made. It is almost always assumed that the long run expected values of (properly scaled) variables are unique and not affected by macroeconomic policy. This happens because macroeconomists do not want to talk about the causes of technological progress or the determinants of long run outcomes.

This means (as stressed by Farmer) that the natural rate hypothesis of unemployment is accepted by default. It is also assumed to be valid here in Rome, where unemployment has been much higher in the past 4 decades than it was before.

More generally, it is generally assumed that macroeconomic policy can't affect the long run values of real variables. This means that macroeconomists tell politicians that far sighted statesmen will focus only on price stability, because the real variables will take care of themselves. This means that politicians who don't trust themselves (or especially each other) will impose that exclusive focus as a rule.
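The long-run neutrality built into the benchmark can be illustrated with a toy simulation. This is my own minimal sketch (a Solow-style accumulation equation with invented parameter values, not a model from any of the posts discussed): a temporary change in the saving rate, standing in for "policy", moves the path of capital but not the steady state.

```python
# Toy Solow-style model with exogenous technology: a hypothetical sketch of
# the benchmark assumption that policy cannot move long-run real outcomes.
# All parameter values are invented for illustration.

def simulate(saving_rate_path, k0=1.0, alpha=0.3, delta=0.1, T=500):
    """Iterate k_{t+1} = s_t * k_t**alpha + (1 - delta) * k_t and return k_T."""
    k = k0
    for t in range(T):
        k = saving_rate_path(t) * k ** alpha + (1 - delta) * k
    return k

baseline = simulate(lambda t: 0.2)
# "Policy": the saving rate is raised to 0.3 for the first 50 periods only.
temporary_boost = simulate(lambda t: 0.3 if t < 50 else 0.2)
# Both paths converge to the same steady state k* = (s/delta)**(1/(1-alpha)).
steady_state = (0.2 / 0.1) ** (1 / (1 - 0.3))
print(baseline, temporary_boost, steady_state)
```

In this benchmark world only a permanent policy change moves the long run; the point of the post is that this property is assumed, not demonstrated, and hysteresis stories like Farmer's break it.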

So we have a European Central Bank with a single price stability mandate, and a Stability and "Growth" pact which forbids fiscal stimulus.

I think the effects of these policy choices have been horrible. I also think that at least part of the blame belongs to macroeconomists who wanted to focus on something else and neglected to warn policy makers that their agreements for the sake of argument weren't a scientific consensus.

Monday, October 10, 2016

Hillary Clinton's Tweet About Rape

Hillary Clinton tweeted

Hillary Clinton ✔ @HillaryClinton

Every survivor of sexual assault deserves to be heard, believed, and supported.

2:09 AM - 23 Nov 2015

This is evidence that human intelligence will be buried because it has created its own gravedigger -- Twitter. I will attempt a close reading of the tweet. I think it is generally misinterpreted.

The widespread interpretation (sorry, no links) is that everyone who claims to be a survivor of sexual assault deserves to be believed. This would be a crazy proposal. Without qualification, that means deserves to be believed by police, prosecutors and jurors. This would give each of us the power to send any of us to prison for years whenever we pleased.

But that isn't what Clinton wrote. "survivor" is not the same as "person who claims to be a survivor". Taken literally, Clinton said that people who tell the truth deserve to be believed. This is a truism. It does not imply that it is humanly possible to provide every truth teller with the belief he or she deserves. We may honestly not know that a true claim is true.

Total literalism makes the statement a truism. The widespread interpretation, which goes beyond the dictionary definitions of words, makes it an insane proposal.

I think the literal interpretation is more plausible. It would make no sense to tweet "Everyone who accurately claims to be a survivor of sexual assault deserves to be believed" since that goes without saying. It would make sense to write "Everyone who accurately claims to be a survivor of sexual assault deserves to be heard, believed, and supported," because the support doesn't automatically and effortlessly follow from the belief.

I actually think that Clinton was presenting a goal knowing it can't be fully achieved. There is no way to believe every accurate claim of assault and disbelieve every inaccurate claim. We can't always know.

Trump's Lies about Hillary Clinton and Bill Clinton's Sexual Misconduct and Alleged Sexual Misconduct

Donald Trump went there (as we all knew he would eventually). After being caught on tape boasting about his sexual assaults, Trump claimed that he was lying (plausible) and that Bill Clinton actually did it (including rape). This, together with the 22nd amendment, will certainly prevent the re-re-election of Bill Clinton, but it doesn't seem to have much to do with the 2016 Presidential election. Trump regularly tries to claim that his accusations against Bill Clinton are relevant, because Hillary Clinton was a very nasty enabler who attacked, shamed, humiliated and threatened the women with whom Bill Clinton conducted himself badly.

I have been concerned that reporters have reported that accusation against Hillary Clinton without writing whether it is supported by any evidence. I have also wondered if I missed something back in the 90s when I wasn't such a US political news addict. I have now concluded that Trump's accusations are completely unsupported by any evidence at all. It isn't surprising that Trump would make convenient claims of fact based on zero evidence. It isn't even surprising that journalists report slanderous claims without noting that they are false.

But, before going on, I must stress that I consider this another gross journalistic failure. I think reporters should have a rule that they just don't report accusations without independently checking the evidence. I do not think minimal journalistic standards are met when they quote Trump's accusations without any discussion of whether they are supported by any evidence. I am absolutely sure that many people are convinced that it is generally agreed that Hillary Clinton attacked some woman with whom Bill Clinton had sex or whom Bill Clinton harassed. This is an accusation which I have read many times.

Only in the past day, and only by deliberately looking, have I found some fact checks. Trump claimed “Bill Clinton has actually abused women and Hillary has bullied, attacked, shamed and intimidated his victims.” I know of no case in which he named the women whom he claimed were attacked by Clinton. I assume they are among the set of Monica Lewinsky, Paula Jones, Gennifer Flowers, Kathleen Willey and Juanita Broaddrick. I know of no claims by Trump about the time or content of the alleged attacks, threats etc. I think a challenge for fact checkers is to add enough to the totally vague abstract accusations to make them checkable.

I will try to guess what Trump might have had in mind. Of course I don't really think he had any specific claim in mind, I think he just found it convenient to accuse Hillary Clinton.

1) This seems to be the closest to a hint of some evidence -- yesterday Juanita Broaddrick said that Hillary Clinton threatened her. In the past she has described this alleged threat in some detail:

Soon after, Broaddrick says, she ran into Hillary Clinton at a political rally Broaddrick had promised friends she would attend. Hillary shook her hand and thanked her for everything she had done for Bill. To Broaddrick, the gesture felt like a threat to stay silent.

Same link as above (and by the way Jonathan Cohn and Ryan Grim's whole post is well worth reading (unlike this one))

That's it (as far as I know). Hillary Clinton is accused of shaking Juanita Broaddrick's hand and thanking her. Really. As far as I know her claim is the strongest evidence in favor of Trump's accusation. This is absolutely exactly nothing. It sure doesn't fit the standard definitions of "bullied", "attacked", "shamed" or "intimidated."

Other efforts to understand what the hell Donald Trump might have in mind (and what he wants us to believe) were made by Glenn Kessler and Michelle Ye Hee Lee, working overtime at the Washington Post. They quote and check Trump:

“Hillary Clinton attacked those same women, attacked them viciously.”

They don't seem to have any firm guess as to what specific claim he would make if pinned down (which he won't be). The phrase "those women" implies at least Broaddrick, Willey, and Jones. However, the only case they even consider is Lewinsky (who has never accused Bill Clinton of assaulting or harassing her).

One of the interviews that Clinton’s critics have pointed to is a Jan. 27, 1998 interview on the Today Show, saying it showed Clinton was discrediting allegations by then-White House intern Monica Lewinsky.


“I mean, look at the very people who are involved in this, they have popped up in other settings,” Clinton told Matt Lauer. “This is the great story here, for anybody willing to find it and write about it and explain it, is this vast right-wing conspiracy that has been conspiring against my husband since the day he announced for president.”


at the time of the interview, Lewinsky also denied there had been a relationship. Her lawyer had submitted an affidavit on Jan. 12 from her saying she “never had a sexual relationship with the president.”

So Hillary Clinton allegedly discredited Lewinsky by claiming (incorrectly) that Lewinsky had not committed perjury.

Note that the utterly absurd accusation that "Clinton was discrediting accusations by ... Monica Lewinsky" before Lewinsky made such allegations is still being discussed. It is completely 100% ridiculous, but it seems to be the best "Clinton's critics" can do (I definitely trust Glenn Kessler and Michelle Ye Hee Lee to be tough on Clintons).

They link to a full fact check by Michelle Ye Hee Lee here. She gives the accusation, made in an advertisement by a pro-Trump PAC, that Hillary Clinton “savaged their dignity and shamed them” three Pinocchios. The only evidence actually related to Hillary Clinton that the ad presents is the interview which aired while Lewinsky was still denying having had sex with Bill Clinton. The accusation that by saying Lewinsky told the truth when under oath, Clinton "savaged" her "dignity" is utterly totally absurd.

The rest of the claims are about “the Clinton effort." Note the absence of a first name before "Clinton." Here the claim is that Hillary Clinton is responsible not only for her husband's actions but also for the actions of his aides and advisers.

The accusations against Hillary Clinton are clearly unsupported by any evidence and utterly contemptible. The more important point is that they are frequently reported without any mention of their groundlessness, which is despicable journalism.

Thursday, October 06, 2016

Robert Frank on Frankness

Robert Frank is brilliant and extraordinarily able both to explain and to critique the assumption that the world is in Nash equilibrium.

He explains why game theory tells us "that rational observers should conclude that failure to disclose relevant information implies that the information must be as damaging as it could possibly be." Yet Donald Trump won't reveal his tax returns.
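The game-theoretic result Frank summarizes is the classic "unraveling" argument: if disclosure is free and verifiable, everyone whose information beats the market's inference about non-disclosers discloses, and the inference ratchets down until silence signals the worst possible type. A minimal sketch of that logic (my own construction, with made-up uniform qualities):

```python
# Sketch of the "unraveling" argument: with free, verifiable disclosure,
# every type above the observers' inference about non-disclosers prefers
# to disclose, so in equilibrium only the worst type stays silent.
# The quality values are an illustrative assumption.

def unravel(qualities):
    hidden = sorted(qualities)          # types currently not disclosing
    while len(hidden) > 1:
        inference = sum(hidden) / len(hidden)   # pooled belief about hiders
        still_hidden = [q for q in hidden if q < inference]
        if still_hidden == hidden:      # no one wants to switch: done
            break
        hidden = still_hidden
    return hidden

print(unravel(list(range(1, 11))))  # only the lowest type keeps quiet
```

Frank's point is that real observers don't perform this iterated deduction, which is why Trump's secrecy was less damaging than the returns themselves would have been.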

Frank presents a plausible (indeed convincing) explanation for our failure to act as agents in Nash's model would.

First he discusses the standard case in which the signal is a guarantee made by a producer to purchasers of the product.

Theory fails in this scenario for the same reason it seems to have failed for Trump’s tax returns: Seeing is believing. It’s one thing to deduce from an abstract theory that undisclosed information is as unfavorable as it could possibly be. But it’s quite another thing to witness unfavorable information firsthand. Because our powers of attention and imagination are limited, knowing there must be a bombshell in Trump’s tax returns is actually significantly less damaging than seeing the bombshell itself. (Think of your visceral reaction when you heard about Trump’s nearly $1 billion loss, compared with the vaguely negative impression you had of his tax situation beforehand.)

Frank is so stimulating that he stimulated me to defend the empirical relevance of the concept of Nash equilibrium. I don't believe the following argument at all, but I think it holds together. It is also possible that the apparent non Nash equilibrium actions are based on the theorist's misunderstanding of the game people are actually playing.

Uninterestingly, it is always possible to make up a utility function to justify any behavior. So if Trump would find it horribly painful to publish his tax returns and doesn't care much about becoming president, then he wouldn't publish them even if they looked beautiful. If voters know this about Trump, then it makes sense to not infer anything from his secrecy. This argument is silly -- it is always possible to reconcile anything with Nash equilibrium by assuming people really really want to do whatever they did.

But I think there is a less silly defence in this case. Consider a low information voter who doesn't know whether Trump has released his tax returns (such voters may be few in number but they exist). This voter draws no inference from the unknown fact that Trump has kept his returns secret. However, if Trump were to release them and reporters were to discuss the bombshells at great length, it might get through to the resolutely news ignoring low information voter. Now suppose everyone who is paying any attention assumes that Trump's returns are as bad as they could be (without him actually being prosecuted). Then it becomes rational to keep them secret to avoid coming to the fugitive attention of the lowest information voters. This is a general issue. In the simple games used to discuss the issue, it is assumed that information transmission is perfectly efficient, so consumers know what sellers have made public. That's not the way I consume. I don't check if some product has a warranty or guarantee before buying it. I don't read the limited warranty after I buy it. I might check (maybe) if the product turns out to be defective. Since I don't know which products come with guarantees, it isn't easier to sell me a product with a guarantee. However, I have, in my life, taken advantage of warranties (once or twice). If all consumers were like me, it would be best for producers of the highest quality products to make no guarantees.
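The consumer story above can be put in numbers. A toy calculation (entirely my own, with invented figures): if only a fraction of buyers ever notice a guarantee, offering one buys little extra demand while still costing something whenever the product fails, so for sufficiently inattentive buyers even a high-quality producer prefers silence.

```python
# Toy calculation of the inattentive-buyer story: invented numbers, only
# meant to show that disclosure stops paying when nobody checks.

def gain_from_guarantee(attention, premium=5.0, defect_rate=0.02, refund=100.0):
    """Expected per-unit profit change from offering a guarantee.

    attention: fraction of buyers who notice the guarantee and pay a premium.
    defect_rate * refund: expected cost of honoring the guarantee.
    """
    return attention * premium - defect_rate * refund

print(gain_from_guarantee(attention=0.8))  # attentive buyers: guarantee pays
print(gain_from_guarantee(attention=0.0))  # buyers like the author: pure cost
```

With these made-up numbers the break-even attention level is defect_rate * refund / premium = 0.4; below it, the disclosure logic of the Nash model runs in reverse.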

Since I don't bother to check, I don't know what I have and haven't been told. In the models, it is assumed that I know this automatically and that it requires no effort at all to deduce the implications.

Again, I think Frank's explanation is correct and that this story is silly.

Wednesday, October 05, 2016

Benchmarks, Models, and Hypotheses

I have been wondering about the frequent use and alarming rhetorical power of the word "benchmark". It often appears in the phrase "benchmark model," which is inconvenient, because I want to contrast benchmarks and models and don't want to write about the difference between benchmark models and other models.

Here I use "hypothesis" to refer to a collection of statements which we think might be true, such that we are eager to find out if they are all true; "model" for a collection of statements which we know are false but which might be a useful approximation to the truth; and "benchmark" for a model which we wish to use only by contrasting it with models which we think might be useful approximations to the truth.

I imagine hopes followed by disappointments in the following order.

1) (compound complex) statement P might be true and P implies Q which we can observe.

2) Q is false so P isn't true, but P might still be a useful approximation to the truth because other implications of P are approximately true.

3) All the attempts to use P to approximate reality have failed, because each implication is far from the truth. P has been modified every time we try to use it, so that the implication (which would be useful if correct but which is incorrect) is eliminated. We can fit any observed pattern after observing it but continually fail to predict anything correctly. Work starting with P shares the fault of totally undisciplined empiricism, which can describe but not forecast.

4) However, P is a useful benchmark. We can understand each of the stylized facts by remembering why each proves P false and by noting how P had to be modified to fit the fact.

I think macroeconomics is reaching the 4th stage. The DSGE models which have dominated academic work for decades are based on assumptions which (it is now asserted) were always assumed to be false. They are not especially useful for forecasting (and it is now asserted that they were never meant to be used to forecast). They offer limited guidance for policy in a crisis, because the crisis occurs exactly when one of the standard assumptions fails. However, they are still used as benchmarks. New models are presented as modifications of a standard model. One modification is made per article. Insights are obtained, because the modified assumption must cause the difference in results between the benchmark model and the new model.

My view is that the claim that something is a useful benchmark might be false.

In fact, I think it is similar to the claim that a model is a useful approximation to reality. A model is used by calculating what the outcomes caused by different policies would be if the model were the truth; it is a useful approximation if those predictions of outcomes conditional on policies are approximately accurate. The useful model is used to understand approximately how things would be different if different policies were implemented. Similarly, a benchmark model is used to understand how things would be different if different assumptions were true. So we determine the effect of, say, some financial friction by comparing a new DSGE model with the financial friction to the standard DSGE model without it. Again the effort is to see how changing something changes outcomes. The difference might be that policy makers can't really eliminate the financial friction, so the actual outcome is compared to something which can be imagined but not achieved. However, the claims are roughly equally strong. Blanchard discusses considering a policy and considering a distortion as if they were the same sort of considering: "They can be useful upstream, before DSGE modeling, as a first cut to think about the effects of a particular distortion or a particular policy".

I think the choice of a benchmark is important because one modification is considered at a time. If implications were a linear function of assumptions, then it wouldn't matter to which model one made a change. But they aren't. The way in which an unrealistic DSGE model differs from the same model with a financial friction can be completely different from the way in which the real world would be different if a financial friction were eliminated.

But I think there is a more important problem with accepting a DSGE model as at least a useful benchmark. The result has been that the vast majority of models in the literature share many of the implications of the benchmark model. So, for example, if the benchmark model has Ricardian equivalence, so do most of the modified models. The result is that if one surveys the literature and attempts to see what it seems to imply about the effects of the timing of lump sum taxes, it sure seems to imply there are probably no such effects. Most models imply no effect. The possibility of improving outcomes with temporary lump sum tax cuts is not discussed. When such cuts were proposed in the USA in 2009 (as part of the ARRA stimulus bill) many economists argued that policy makers were ignoring the results of decades of academic research. In fact, they were ignoring the implication of the standard benchmark model which was used, just as a benchmark, in spite of its poor performance.

This is the same pattern seen following the stronger hope that a model might be a useful approximation. The model is introduced with the expressed hope that it might be a useful approximation. Implications are derived. They turn out to be false. It is noted that models are false by definition, and that other implications of the model might be useful approximations. After years or decades, the model is no longer used by specialists in the field. However, it is still presented to outsiders as a useful first order approximation when it isn't. In this context "first order" means "according to the first model developed by my school of thought".

In both cases, actual practical implications are derived through a process which is completely invulnerable to evidence.

I am writing this, because the more diplomatic critics of mainstream academic macroeconomics insist that the models, which they find unsatisfactory, are useful benchmarks.

An example from a not so diplomatic critic. I think this claim is made without any consideration of the possibility that it might be false, and, indeed, a damaging falsehood. It is the least one can say if one isn't willing to tell people that they have wasted decades of their working life. But that doesn't mean that it isn't more than one should say.

A simple example illustrates the danger of changing one assumption at a time. The model is just the original Lucas supply function. The idea is that output is chosen by suppliers who don't observe the price level, so output is equal to the actual price level minus the rational forecast of the price level. This implies that output is white noise, and the location of the distribution of output doesn't depend on the behavior of the price level and therefore doesn't depend on monetary policy. With a standard assumption (or approximation) it implies that the expected value of output conditional on data available to agents is a constant which doesn't depend on monetary policy. This is the policy ineffectiveness proposition, which led Sargent and Wallace to note that, in their model, the optimal policy was to set the inflation rate to some desired target and ignore everything else. Notably this is the policy mandate of the European Central Bank.

There are two counter arguments, neither of which, on its own, amounts to much. The first is that agents in the model are assumed to have rational expectations and so automatically know the policy rule. It is much more reasonable to assume that agents are boundedly rational and learn the policy rule. It was correctly argued that, given the other assumptions, this learning will have only temporary effects and that the rational expectations assumption will become true in the long run. The second: it was later argued (based on massive evidence) that the current unemployment rate affects the future non-accelerating-inflation rate of unemployment, that is, that cyclical unemployment becomes structural, that is, there is hysteresis. In this case, supply depends not only on price level prediction errors but also on the time varying natural rate. It was correctly argued that, in this model too, the optimal policy was to target inflation -- the expected level of output didn't depend on policy.
Here, in passing, it is worth noting that the additional assumptions mentioned above, which were required to get from "location" to "expected value", become critical [Cite Pelloni et al].
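For concreteness, a sketch in my own notation of the model as described above, with the supply coefficient normalized to one; the hysteresis equation is one simple way to formalize "cyclical unemployment becomes structural" and is an illustrative choice, not the only one.

```latex
% Lucas supply: output is the price-level surprise plus noise
y_t = p_t - E\!\left[p_t \mid I_{t-1}\right] + \epsilon_t,
\qquad E\!\left[\epsilon_t \mid I_{t-1}\right] = 0
% so under rational expectations, for ANY monetary policy rule,
E\!\left[y_t \mid I_{t-1}\right] = 0
% with hysteresis, a time-varying natural level \bar{y}_t inherits past cyclical output:
y_t = \bar{y}_t + p_t - E\!\left[p_t \mid I_{t-1}\right] + \epsilon_t,
\qquad \bar{y}_t = \bar{y}_{t-1} + h\left(y_{t-1} - \bar{y}_{t-1}\right)
```

Note that under rational expectations the conditional expectation of output is \(\bar{y}_t\) in either version, so neither modification alone overturns policy ineffectiveness.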

But consider a newly installed monetary authority setting policy for an economy populated by boundedly rational agents who have to learn the policy rule. The authority should ask what would happen if she were less of an inflation hawk than people expect (not with rational expectations but with the actual beliefs of the boundedly rational agents in the economy). The result would be temporarily higher output while agents learn. Because of hysteresis, this would cause permanently higher output. Alone, neither boundedly rational learning nor hysteresis changes the optimal policy; together they change everything. The rule that only one change to the benchmark model is considered at a time can prevent people from seeing this. In fact, I think it has prevented most macroeconomists from seeing this.
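The interaction can be shown in a few lines of simulation. This is my own stylized sketch: the adaptive-learning rule, the hysteresis rule, and all parameter values are illustrative assumptions, chosen only to make the point that the two modifications matter jointly but not separately.

```python
def simulate(T=200, g=0.5, h=0.3, surprise=2.0, rational=False):
    """Output path after a new authority sets inflation `surprise` points
    above what agents initially expect.  Stylized and illustrative.

    g : speed of adaptive learning of the policy rule (0 < g <= 1)
    h : hysteresis -- fraction of the cyclical gap that becomes structural
    """
    expected = 0.0       # agents' initial belief about inflation
    actual = surprise    # the new, less hawkish, actual inflation rate
    natural = 0.0        # natural (structural) level of output
    path = []
    for _ in range(T):
        if rational:
            expected = actual            # rational expectations: no surprise, ever
        gap = actual - expected          # Lucas supply: cyclical output = surprise
        natural += h * gap               # hysteresis: cyclical gains turn structural
        path.append(natural + gap)
        expected += g * (actual - expected)  # boundedly rational learning
    return path
```

Learning alone (h=0) gives a boom that decays geometrically to zero; hysteresis alone under rational expectations gives nothing at all, since there is never a surprise; the two together leave output permanently higher after the learning is over.
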

OK Amateur partisan intellectual history after the jump.

A testable hypothesis always includes the core hypothesis of interest and auxiliary hypotheses required to obtain testable predictions (so Newton's model of the solar system includes the core hypotheses of his law of gravity and laws of motion and the auxiliary hypotheses that the sun and planets are rigid spheres and that the effects of all forces but gravity are negligible). The problem is that the so called core hypotheses of the PIH, REH and EMH are no such thing. They are, in fact, always the same non-hypothesis: that, ex post, one can find some utility function such that the actions of agents are consistent with rational maximization of the expected value of that utility function. This is true, because it must be true. It is agreed (and easily demonstrated) that the assumption that agents maximize something has no implications at all without some further assumptions about what they maximize. The core hypothesis is not falsifiable. If rejection due to failure of auxiliary hypotheses is not considered a reason to abandon the research program, then the research program is completely invulnerable to evidence.

This is a deadly problem, but I want to write about a different less important problem.

I am very irritated by the phrase "all models are false by definition". It mocks model testers who have demonstrated that some model has false implications. The implication is that the model testers misunderstood the aim of the model developers, incorrectly perceiving a model to be a hypothesis. Foolish salt water economists decided for some silly reason that the permanent income hypothesis, the rational expectations hypothesis and the efficient markets hypothesis were hypotheses. I claim that this shows bad faith. A statement is a hypothesis (with the associated scientific dignity) until it is proven false; then it turns out that it was always a model and the people who proved the statement false are silly.

The repeated use of the word "hypothesis" in the 50s, 60s, and 70s strongly suggests that the equations in question were not originally considered parts of models which were false by definition. Thomas Sargent's phrase "take a model seriously" sure seems to imply "treat a model as a null hypothesis." And, in fact, Sargent once said (original pdf download here) that Lucas and Prescott were enthusiastic about hypothesis testing until he falsified too many of their hypotheses:

My recollection is that Bob Lucas and Ed Prescott were initially very enthusiastic about rational expectations econometrics. After all, it simply involved imposing on ourselves the same high standards we had criticized the Keynesians for failing to live up to. But after about five years of doing likelihood ratio tests on rational expectations models, I recall Bob Lucas and Ed Prescott both telling me that those tests were rejecting too many good models. The idea of calibration is to ignore some of the probabilistic implications of your model but to retain others.