Thursday, September 25, 2014
I will write a second comment. I will try to respond to your questions. I wasn't an economist in the 70s so (as usual) I don't know what I am talking about.
"So why didn’t this happen? Why did we have a revolution which overturned an existing methodology and temporarily banished Keynesian theory, rather than an adaptation and augmentation of what was then mainstream?"
"Was the attraction of overturning orthodoxy too strong, as it is for a minority of heterodox economists today? "
I think this attraction was very strong. There is something else. A couple of economists from U Minnesota mentioned Harvard for no clear reason -- I think there was and is an element of Midwestern pride (neither of those economists is originally from the USA). In particular, a distasteful aspect of appeals to common sense or judgment is that they are, and must be, assertions of intellectual authority. Math has the appealing feature that a proof is a proof and does not rely on the authority of the person stating it.
" Did an ideological imperative of dismissing Keynesian ideas play a role?" I think it definitely did. Friedman, Lucas, Prescott and Barro are very ideological. The models change but the policy proposals remain the same. Views on methodology change (Sargent asserted that Lucas and Prescott were all in favor of testing models as null hypothesis until he rejected their favored models).
"To what extent was the hostile reaction of many in the macroeconomic establishment to eminently sensible ideas like rational expectations responsible? "
I think the extremely hostile reaction played an important role (of course, I don't think that rational expectations is a sensible idea).
"Was the attraction of a methodology where at least you could be sure you were consistent too enticing, ? "
Here I would say that mathematics was enticing. Writing papers that look like Physics except the variables have different meanings is enticing.
"perhaps encouraged by increasing segmentation between theoretical and empirical macro? "
I'm quite sure this came later. In the 70s and early 80s new Classicals developed new empirical tools and did a lot of empirical work. I think theoretical macro separated from empirical macro when the data kept saying that new Classical models were not good approximations to reality.
So far, I have made no progress towards understanding how a few departments were so influential. Here I think that hard work and the passion due to fanaticism were important. The general rightward political shift also must have mattered. I think the loony left somehow managed to discredit Republican Keynesians. But I really don't understand how and why it happened.
I have two other thoughts.
If macro theory (or non-theory or ad hoc models or whatever) were about as good in 1973 as it would ever be, PhD candidates and junior faculty would still have to present something new. In fields which aren't progressing, a new and crazy original contribution is preferable to an unoriginal contribution. Here I think the trouble is that some questions in macro are too simple and have for decades been answered well enough to serve. I think it is very easy to fit consumption and investment without theory -- 1960s equations fit well out of sample.
If aggregates depend on expectations which are not rational nor adaptive nor anything which can feasibly be modelled, then the problems of stabilizing output and even of forecasting can't be solved. If so, the only reasonable thing to do is to try something and, if it fails badly, try something completely different.
I think many of the questions are too easy and have been answered (but young researchers can't afford to accept this) and the unanswered questions are too hard (I fear unanswerable).
Now my usual boring pointless comment
Here I am again more certain than ever that this comment serves no useful purpose. You support an eclectic approach and not the utter rejection of all of the past 40 years of theoretical macroeconomics. I guess this must seem obviously reasonable to you and to basically everyone. However, while you have very often stated this view, I don't recall much in the way of argument, evidence or even examples.
"Microfounded models could have shown the kind of errors that can arise in more empirically based models when theory is ignored or only applied piecemeal, "
for example? To be sure such errors can arise, one should point to examples of such errors arising. The classic example is the 1960s-era Keynesian ad hoc assumption that the Phillips curve is a structural relationship. This error never happened. It may well be that going to the data armed only with common sense and verbal arguments leads to errors which would have been avoided if one used formal theory. But to claim that such errors occurred, one should cite them with authors, dates, titles, journal titles, volumes, and page numbers.
"However I also think the new ideas that came with that revolution were progressive. I have defended rational expectations, I think intertemporal theory is the right place to start in thinking about consumption, and exploring the implications of time inconsistency is very important to macro policy, as well as many other areas of economics. I also think, along with nearly all macroeconomists, that the microfoundations approach to macro (DSGE models) is a progressive research strategy."
I note that the word "progressive" appears twice. I think it is very useful, too useful. To argue that something is progressive, one need not point to achievements. The word is consistent with the claim that the approach will in the future yield valuable fruits. I recall reading in 1982 Sargent expressing the hope that realistic micro founded models which could be taken seriously and confronted with the data would be developed in the following 30 years. I say time is up.
I also note "to start". Now the claim that something is the right place to start can't be disproven. The claim is that something good will happen. I know of no evidence against the view "(6) Changes in expectations of the relation between the present and the future level of income. — We must catalogue this factor for the sake of formal completeness. But, whilst it may affect considerably a particular individual’s propensity to consume, it is likely to average out for the community as a whole. Moreover, it is a matter about which there is, as a rule, too much uncertainty for it to exert much influence." Keynes 1936 chapter 8 section II.
3 final questions.
1. What evidence could possibly convince you that intertemporal theory is not the right place to start in thinking about consumption?
2. What evidence could possibly convince you that DSGE isn't a progressive research program?
3. What could the data do which they have failed to do in the past 40 years?
Wednesday, September 24, 2014
The aside is:

(Of course, information loops can go the other way, too: a lot of liberals probably don't know that of the much-discussed 8 million enrollees in Obamacare's insurance exchanges, new data suggests only about 7.3 million stuck around as paying members.)

It is, of course, true that a lot of liberals don't know about the 7.3 million, as it is true that a (smaller) lot of liberals never heard about the 8 million -- political junkies can't imagine how little normal people know about public affairs.
That said, Klein's argument is so weak that, if I didn't trust his integrity, I would suspect that he was trying to convince people that all loopy information loops loop right. 7.3/8 > 90%. The fraction is at the upper end of predictions. Political-junkie liberals have been reading forecasts that the ratio would be about 90% for months from, for example, Charles Gaba.
Trusting Klein's integrity as I do, I think the aside really does provide strong evidence for the opposite of its apparent claim.
First, the news for the ACA has to be very good indeed for "only about 7.3 million" to be the example of not-as-good-as-the-rest news for the ACA.
Second, the hack gap is huge. New liberal alternatives to the MSM (such as Talking Points Memo and VOX) play it straight. In fact, liberals like Klein feel the need to add a bit of Ballance.
Well, at least I sincerely honestly believe this. Klein's critique of Klein and people like him is so feeble, and his interest in convincing people of the opposite of his stated claim so obvious, that my confidence that he didn't make an invalid argument on purpose indicates my absolute faith in his integrity.
Saturday, September 20, 2014
Jonathan Chait recently criticised "Tom Frank, author of What's the Matter With Kansas?". The post was devastating (as Chait's critiques often are, I mean almost always are, I mean hey, I can't think of an exception).
More recently he chose to write about Kansas. Obviously he felt the need to refer to "What's the Matter with Kansas" and "The Wizard of Oz." He had a problem. He summarized his actual views on recent events in Kansas and "What's the Matter with Kansas" perfectly with the (brilliant as usual) paragraph:
Brownback’s biggest mistake was to forget a lesson Frank made well: Even in Kansas, tea-party populism requires the maintenance of a ruse. One needs cultural elites and other enemies to bash in broad daylight while doing the dark work of plutocracy behind the scenes. Openly conducting class warfare on behalf of the rich is no way for a pseudo populist to get ahead.

Now I haven't read "What's the Matter with Kansas" but all of the dozens of summaries of it which I have read present that as the central theme of the book. Recent polling data do as much as so few data points possibly could to suggest that Frank was totally right.
But Chait doesn't want to just agree with Frank. So he focuses on a separate issue, one about which recent data from Kansas provide essentially no evidence one way or the other: "Frank audaciously proposed that Democrats address their catastrophic standing in Kansas, and places like it, not by moving toward the center but away from it, by embracing populist economics."
Chait correctly notes that the Democrats in Kansas didn't do this. Thus he should conclude that data from Kansas provides no evidence one way or the other about what would happen if they did. But having brought up the argument, he chose not to conclude that he had nothing new to say, and instead dressed up the nothing he had to say as something.
Chait wrote a sentence which is as vapid as most of his writing is incisive. "In fact, as much as Kansas provides liberals a happy story line in an otherwise difficult campaign season, it also offers a lesson that might give progressive Democrats pause."
This is, as usual, brilliant prose. But in this case the skill is used to obfuscate. We have the "might" which makes right, watering down a "give ... pause". I risk nothing when I write that something should give someone who made an argument pause. It is almost always perfectly safe to propose that someone pause and think some more. If the person is standing in front of a speeding car, it is unsound to propose a pause for further reflection. Otherwise it is a proposal so mild that it is safe to make.
But Chait knows that he doesn't have new evidence for a case against populism, so he waters down the water with a "might".
Aside from picking on prose, I do have an actual argument. Chait relies on the magic word "center" without defining it. He is hinting at the argument that elections are won by racing to the center. The problem is that the word has two completely different definitions, even when used as a term of art by political scientists. It sometimes means a point midway between Republicans in Congress and Democrats in Congress (as in DW-NOMINATE scores) or halfway between voting for Democrats and voting for Republicans (as in red-district vs blue-district scores). Here, by construction, the center is between the two parties, so for Democrats moving left is moving away from the center.
In contrast, the center towards which it is good strategy to race is the position held by the median voter. Frank's point (as summarized in the quoted Chait paragraph) is that Republicans can win lots of elections even if their positions on bread-and-butter populist-vs-plutocrat issues are far to the right of the median positions of US adults. The logic is partly that they can obfuscate, partly that they can lie, and mostly that ideology is multi-dimensional and people also vote based on the "social issues" (really the pelvic issues).
This means that a move from the current Democratic mainstream towards more populism might be a move towards the views of the median voter, hence a move towards the center by this second definition. In fact, a solid majority of US adults support higher taxes on high incomes, while the official Democratic mainstream position is that the ACA surtax and the expiry of the Bush cuts for incomes over $400,000 for an individual or $450,000 for a household are enough. Most US adults and the left fringe of elected Democrats supported increased Social Security pensions and increased Medicare and Medicaid spending. Basically the median US adult is markedly more populist than the median elected Democrat. I challenge Chait to find a poll and a roll call vote which don't fit this pattern.
So Frank makes the audacious proposal that Democrats would win more elections if their policy proposals were closer to the policies supported by the median voter.
"Might" is the ultimate weasel word. It should be obvious that it appears in a conclusory sentence only when the writer knows perfectly well that he (or she) doesn't have a case. Yet it appears very often.
Greg Sargent wishes he were naive enough to believe that the fact that most voters agree with the Democrats' anti ISIS policy and disagree with Republicans' alternative proposal means that the issue will help Democrats in the mid terms. In fact he knows better. So he whips out the mighty "might" and concludes "But on the other hand, if Republicans really want to make these elections about national security, you’d think it just might prompt some media pressure on GOP candidates to say what course of action against ISIS they support and to clarify whether they support another ground war in the Middle East."
I agree that this won't happen.
His post is here. My comment is there and also below.
As I began reading this post I almost had the impression that you were going to argue that the fact that the US public rejects the Republicans' proposed response to ISIS implies that the issue won't help the GOP in November, but then I thought to myself "no Greg Sargent can't be that naive." I now know you aren't.
I think by "could -- or at least should" you mean "should" and believe "should -- but almost certainly won't."
The post illustrates the intellectual distress caused by being a serious commentator advising a country (indeed a world -- it's not just the USA) which doesn't take the responsibilities of self rule seriously.
with a "Perhaps" (which means "I know perfectly well that") you concede that candidates win with slogans and "keeping their own ideas vague". I am sure you also know that US elections tend to be decided as if they were referendums on the President, that the President and his party are held responsible for everything and, finally and quite separately, fear makes people Conservative (I am appealing to psychological research but I admit that I won't provide a link).
In the end you are so desperate that you rely on the fact that "might" makes right. Your concluding sentence includes the word "might". As written it is true. Anything at all might happen. A big enough majority of voters might consider competing policy proposals and the evidence which suggests which is better. And pigs might fly.
Who can deny that something might happen? For all I know the GOP might make a useful contribution to the policy debate.
So yes indeed the fact that most voters agree with the Democrats' policy and disagree with the Republicans' proposed alternative might determine how the issue affects the mid terms. But that's not the way to bet.
Saturday, August 23, 2014
A catch to Andrew Gelman for correcting an attempted but inaccurate catch by Alfred Moore, Joseph Parent and Joseph Uscinski, who thought they had caught Paul Krugman in an error on the always-fun topic of conspiracy theories. Not so!
I cut and paste from Gelman:
More particularly, Moore et al. criticize liberal pundit Paul Krugman for writing, “Unlike the crazy conspiracy theories of the left — which do exist, but are supported only by a tiny fringe — the crazy conspiracy theories of the right are supported by important people: powerful politicians, television personalities with large audiences.” They respond, “Krugman is mostly wrong that nuttiness is found mainly among conservatives.” But that’s not what Krugman wrote!

This shows, among other things, the power of Krugman derangement syndrome -- people often write that his claim is refuted by a fact noted in the criticized post.
I think there is a simple solution to the Moore et al problem. I think there should be an editorial rule that if one criticizes another for exaggerating (say, Al Gore "inventing the internet"), oversimplifying, or omitting inconvenient facts, then one is not allowed to paraphrase.
If Moore et al had been required to use only direct quotes of Krugman, their elision of his statement of exactly the fact which they claim contradicted his statement would have been obvious.
What's the problem even with the stronger rule that one must quote directly and only quote directly when one criticizes? Would we run out of pixels? I think the rule should be: cut and paste, and if you insist elide, but don't ever paraphrase when criticizing.
This is now the editorial policy of this blog (catch me in comments and you will get the fame that comes when your comment is pulled back to the main window of a 50 hits a day blog).
I also wrote a comment which notes that we will obtain objectivity no sooner than nirvana, immortality and a pony.
I noticed something funny -- the figure from Kahan Peters et al (2012) doesn't illustrate your point well at all. You write "the research of Dan Kahan finding that roughly equal proportions of political liberals and conservatives in the United States engaged in irrational or “motivated” reasoning" but there is no evidence related to that issue in the figure.
My point (if any) is that one can't tell if reasoning is motivated without knowing the consequences of unmotivated reasoning. As far as I can imagine, we could only tell how vulnerable someone is to motivated reasoning if we were capable of perfect objectivity.
The figure could be explained if a hypothetical perfectly objective and well informed agent would answer 8 and liberals are completely immune to ideological reasoning, and equally well explained if that hypothetical agent would say 3 and conservatives were completely immune to ideological reasoning.
Until we achieve perfect rationality ourselves, we can't measure the irrationality of others. That is, I think your claim couldn't possibly have been demonstrated scientifically given data available in 2014 AD or 1002014 AD.
I had some more pointless text here & ran out of allowed characters exactly when I tried to type "conservatives were completely immune to ideological reasoning." I figured it out (& can explain with my 90 characters left) but irrationally guessed ideological typos or bugs or something.
Tuesday, August 12, 2014
Brilliant. One obvious practical question is whether one gets better forecasts by averaging only gold standard polls (if any are available). Do the non-traditional polls improve or worsen the average? A purely hypothetical, purely copycat pollster who took an average of other polls then added a bit of noise as disguise (purely hypothetical -- non-irony alert, really) would worsen the average.
Anyway, a set of simple practical questions are of the form: if there are N gold standard polls, does the average of the N gold standard polls give a lower forecast error than the average over all polls? Arithmetic says this can't be true if N is zero. I doubt it would be true if N is 1. It might not be true for any N (averaging is powerful and the purely hypothetical fraudulent pollster doesn't exist). So I ask: when is N enough?
There is a real problem. Note that the purely hypothetical pollster might have low forecast errors. The simple fact that averaging improves forecasts makes it possible to make good forecasts which don't contribute anything to the accuracy of the average. In the real world, pollsters who fiddle the numbers to make their results closer to the lagged average will have lower forecast errors than those who don't, even though averaging them in improves the combined forecast less.
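To make the worry concrete, here is a small simulation (all parameters are made up for illustration): five honest pollsters with independent errors, plus one hypothetical herding pollster who just reports the honest average plus a little disguise noise. The herder's own forecast error looks excellent, yet including it barely changes the error of the combined average.

```python
import random
import statistics

random.seed(0)

def trial(n_honest=5, sd=0.02, truth=0.50):
    """One election: honest polls have independent errors (sd = 2 points);
    the herder copies the honest average plus tiny disguise noise."""
    honest = [random.gauss(truth, sd) for _ in range(n_honest)]
    avg_honest = statistics.mean(honest)
    herder = avg_honest + random.gauss(0.0, 0.002)
    avg_all = statistics.mean(honest + [herder])
    return abs(avg_honest - truth), abs(herder - truth), abs(avg_all - truth)

results = [trial() for _ in range(20000)]
mae_honest_avg = statistics.mean(r[0] for r in results)
mae_herder = statistics.mean(r[1] for r in results)
mae_all_avg = statistics.mean(r[2] for r in results)

# The herder's own error is far below a single honest poll's (roughly 0.016),
# but adding the herder leaves the average's error essentially unchanged.
```

In this toy setup the herder would rank near the top of any pollster scorecard while contributing nothing to the accuracy of the average, which is the sense in which low forecast error and usefulness come apart.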
Tuesday, August 05, 2014
The answer all the deficit-panic types offer is basically that we must cut future benefits. But why, exactly, is that something that must be done immediately? If you state the supposed logic, it seems to be that to avoid future benefit cuts, we must cut future benefits. I’ve asked for further clarification many times, and never gotten it.

I am not a deficit-panic type demanding immediate cuts to future benefits, so I can't answer the question. That won't stop me from trying. I can think of three answers. The first is not
0) "Greece Greece I tell you." Krugman understands this argument. In fact I am quoting him putting words in the mouth of a straw man. He once believed something like the non parody version of this. This is one of the errors he pulls out when he is accused of not admitting errors. The short reply is "Japan Japan I tell you." The long one is to ask people to explain how the USA could run out of dollars. Greece can go bankrupt because it borrowed in Euros. California can and Argentina did default because they borrowed in dollars. The US Federal government can't run out of dollars. The true concern isn't for the debtor (US Treasury) but the creditors who don't want the value of their dollar denominated assets to be inflated away.
1) We must cut benefits now, because if we don't we won't cut benefits later (I favor this one). It is hard to cut future Social Security and Medicare benefits, but it is essentially impossible to cut current benefits. If the USA reaches the point where the can can't be kicked down the road, taxes will be increased. My guess is that programs with dedicated revenue streams and trust funds will just continue if the trust fund reaches zero, with the general fund paying part of the cost. The fear of the deficit hawks is that the so-called bankruptcy won't amount to anything and things will just continue until investors lose confidence in Treasury securities. Even if something is then done, it will include tax increases, and probably soak-the-rich type tax increases.
In contrast, they might hope to legislate cuts in future benefits for the currently non-elderly. This means the argument "cut future benefits now to avoid cutting them in the future" is indefensible, because it is insincere. If the aim is to cut benefits rather than raise taxes on the rich, honesty is not the best policy. The argument that an empty trust fund will be like, say, Lehman going bankrupt, and not like Social Security before the Greenspan commission, is needed to convince people to accept distant future benefit cuts which they prefer to sharp emergency benefit cuts, but which they like less than current or future, planned or emergency, tax increases on high incomes.
2) Unfortunately, it might just be "me first, listen to me first, nowwwww." If one's expertise is in long term budget forecasting, the frank statement that one knows about a problem which doesn't need immediate attention is a sure way to be ignored. No one likes the prospect of waiting 20 years before anyone will listen. Everyone argues we should listen to them now.
But I like explanation 1) better. I think it is about taxes on the rich, because it is generally about taxes on the rich.
Tuesday, July 01, 2014
I will assume that unemployment is a function of actual inflation minus expected inflation. I will also assume that people are smart enough that no policy will cause them to make forecast errors of the same sign period after period after period.
Friedman's conclusion follows if there is the additional assumption that expected inflation is a constant plus a linear function of lagged inflation. In this case, unless the coefficients sum to one and the constant is zero, it is possible to cause a constant non-zero forecast error. It can't be that people are dumb enough to stick with a constant plus a linear function with coefficients that sum to anything but one if that rule is exploited to make surprise inflation always positive.
However, I know of no one who ever wrote that such a simple model of expectations is the truth. Rather, some people including Cagan, Friedman, Tobin, and Solow asserted that something like that is a useful approximation at some times in some places. Many authors expressed belief in a more complicated story in which inflation expectations are anchored if inflation is low and stable but not anchored if inflation is high and/or variable.
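Since the argument turns on what a fixed linear expectations rule implies, here is a minimal numeric sketch (coefficients are made up) of the two cases: a rule whose coefficient on lagged inflation is less than one, where a constant surprise is compatible with bounded inflation, and a rule whose coefficient equals one, where the same trick forces inflation to grow without bound.

```python
def run(b, periods=300):
    """Expected inflation = b * lagged inflation (adaptive rule, constant term 0).
    The authority always sets actual inflation one point above expected."""
    pi = 0.0
    errors, path = [], []
    for _ in range(periods):
        expected = b * pi
        pi = expected + 1.0          # constant positive surprise
        errors.append(pi - expected)
        path.append(pi)
    return errors, path

errors_half, path_half = run(b=0.5)   # coefficients sum to 1/2
errors_one, path_one = run(b=1.0)     # coefficients sum to one

# b = 0.5: the surprise is exactly +1 every period, yet inflation
# converges to 1 / (1 - 0.5) = 2, so the trick is sustainable forever.
# b = 1.0: the same constant surprise forces inflation to rise by one
# point per period without bound, Friedman's accelerationist case.
```

This is why the coefficients-sum-to-one case is the only one that survives the no-persistent-exploitation argument: with any smaller sum, agents could be fooled by the same amount every period at a finite inflation rate.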
I think such expectations can be modelled either as the result of boundedly rational learning with hypothesis testing or of rational Bayesian updating. I will try to do so (these are the last words of this post which sensible readers will read).
The monetary authority will, in fact, stick to a simple rule (but agents do *not* know that the rule never changes). It can target inflation and is tempted to trick firms into supplying more than they would under perfect foresight by setting actual inflation higher than expected inflation. I will assume that perfect inflation forecasting causes unemployment to be 5%. This is the non-accelerating-inflation rate of unemployment (NAIRU). Unemployment is linear in the inflation expectations error, so the long-term average deviation of unemployment from 5% is proportional to the long-term average expectations error.
The simple rule may be stochastic with targets based on coins flipped and dice rolled etc in secret. The monetary authority wants low unemployment and low inflation.
The question is: can the long-term average unemployment rate be lower than 5%?
First, bounded rationality with hypothesis testing. The bounded rationality consists of forecasting with a simple rule which might include parameters estimated by OLS on old data. In the very simplest rule, expected inflation is 2% no matter what. The hypothesis-testing part is this: it is assumed that forecasting rules are ordered from a first rule to a second and so on. When agents use rule n they also test the null that rule n gives optimal forecasts against the alternative that rule n+1 gives better forecasts. They switch to rule n+1 if the null is rejected at the 5% level (as always this can be any level, and as always I choose 5% because everyone does). I will assume that rules are also ordered so that if rule n gives persistent underestimates of future inflation, rule n+1 gives higher forecasts.
Forecasting rule 1 is: forecast inflation equals 2%. Rule 2: forecast inflation is equal to a constant estimated by OLS. Rule 3: forecast inflation is equal to an estimated constant plus an estimated coefficient times lagged inflation. Rule 4 is a regression on two lags of inflation. The series of rules goes on to infinity, always adding more parameters to be estimated, and includes the actual inflation rule (the monetary policy rule for this silly model).
Friedman's story about accelerating inflation at 3% unemployment works in this model. Rule 2 is flexible enough for his example. If inflation is higher than forecast inflation by a constant, the estimated constant term in the regression grows without bound.
A key necessary assumption is that agents never accumulate more than a finite amount of data about the monetary authority. A sensible way of putting this is that learning about the Fed Open Market Committee restarts each time a new Fed chairman is appointed. To make things not too easy for myself, I assume that once agents pass from rule 1 to rule 2 they stick with it, using all data to estimate parameters. The data used to test the current rule against the next one are only those accumulated under the current chairman. I will assume chairmen are replaced at known fixed intervals of, say, 100 periods.
Fed Open Market Committee members know all this. They can set inflation so that the 2% forecast rule is never rejected against the estimated constant. The optimal strategy will be mixed; that is, they will randomize inflation so it isn't too easy to learn what the best estimated constant is. I will assume they set inflation equal to a constant plus a mean-zero white noise disturbance term (to be clear, the expected value of the random term conditional on lagged information is always zero).
Clearly the FOMC can set inflation to 2.000001% plus a mean-zero, variance-one disturbance term without getting caught before the chairman's term expires. This means that the long run average unemployment rate can be less than the NAIRU. This means that there is a long run tradeoff between average unemployment and average inflation.
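A quick Monte Carlo sketch of the claim (parameters assumed for illustration: a surprise of 0.05 points rather than 0.000001, unit-variance noise, 100-period terms; agents run a single t-test at the end of each term, a simplification of the sequential testing described in the text):

```python
import math
import random

random.seed(1)

BIAS, SD, TERM, N_TERMS = 0.05, 1.0, 100, 4000

def term_rejects():
    """One chairman's term: agents forecast 2% throughout, then t-test
    H0: mean forecast error is zero (5% level, normal approximation)."""
    errors = [BIAS + random.gauss(0.0, SD) for _ in range(TERM)]
    m = sum(errors) / TERM
    var = sum((e - m) ** 2 for e in errors) / (TERM - 1)
    return abs(m / math.sqrt(var / TERM)) > 1.96

reject_rate = sum(term_rejects() for _ in range(N_TERMS)) / N_TERMS

# The 2% rule survives the vast majority of terms, yet the average
# inflation surprise is +0.05 every period, so average unemployment
# sits below the NAIRU indefinitely.
```

With these numbers the rejection rate stays near the size of the test, so the bias is essentially undetectable within a term; shrinking the bias towards 0.000001 only makes detection harder.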
update: we have a winner so the offer immediately below has expired.
If anyone has read this far, please tell me in comments and I will praise you to the sky in a new post.
Friedman's premises can be true, unemployment can depend only on expectation errors, and yet there can be a long run inflation-unemployment tradeoff. There is a big difference between trying to achieve constant unemployment lower than the NAIRU and trying to achieve average unemployment lower than the NAIRU. Friedman also implicitly assumed that the monetary authority never changes and is known to never change.
Basically, his implicit assumption is that either the Fed can set unemployment to any constant or there is a natural rate. This dichotomy doesn't hold, for many, many reasons. I just described one.
OK, I promised to talk about Bayesian learning, but this post is already way too long. The idea is that we start with a prior with a huge mass on "inflation is 2% plus a mean-zero disturbance term." Then there are positive prior probabilities on a huge variety of other models. However, all of the other models have time-varying coefficients which follow random walks. This means that the forecast conditional on belief in model N depends on parameters estimated with exponentially weighted lagged data. This means that, given the 2.000001% plus noise rule, the ratio of the likelihood for those models to the likelihood of the 2% plus noise model doesn't grow without bound. This means that the posterior keeps a huge mass on 2% and there is a long run tradeoff between long run average unemployment and long run average inflation.