
Friday, October 01, 2010

For Skeptical Sam Contra Nate Silver

I am going to contest a claim about statistics made by Nate Silver. I am commenting on this post (read it).

My comment.

This post seems to be especially excellent even by your standards, which is saying a lot. The very simple calculation has the advantage of being very easy to understand, and the fact that the probability estimate is close to the one that comes out of your complicated model is very impressive.

There is, of course, a confidence interval on the estimated probabilities. If we stick to the very simple calculation (and assume gubernatorial races are not like senatorial ones), then the 7-0 record (leader ahead by 6 to 9 points) rejects the null that the probability of the underdog winning is 20% only at the 20.9% level (I honestly guessed it would be around 5%). Taking a 7th root, I get that one can't reject the null that the true probability of the underdog winning is 34% at the 5% level (one tailed). The 95% confidence interval on the probability (ignoring gubernatorials and using the simple calculation) is (0, 40.9%). You have looked at the point estimate of the conditional probability but not at a confidence interval.
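To make the arithmetic reproducible, here is a minimal sketch of that simple calculation, assuming exactly 7 senatorial races with the leader up 6 to 9 points and zero come-from-behind wins (only the counts, not any model, are taken from the post):

    # Back-of-the-envelope check of the 7-0 record (assumes 7 races, 0 upsets)

    n_races = 7        # senatorial races with the leader ahead by 6-9 points

    # p-value for the null that the underdog wins 20% of the time:
    # the chance of zero upsets in 7 independent races at p = 0.20
    p_value_20 = (1 - 0.20) ** n_races          # 0.8**7 ~= 0.209

    # Largest underdog probability not rejected at the one-tailed 5% level:
    # solve (1 - p)**7 = 0.05
    p_bound_5 = 1 - 0.05 ** (1 / n_races)       # ~= 0.348, hence "34%"

    # Exact upper bound with 2.5% in the tail, i.e. the (0, 40.9%) interval
    p_bound_2_5 = 1 - 0.025 ** (1 / n_races)    # ~= 0.409

    print(f"P(0 upsets in 7 | p = 0.20) = {p_value_20:.3f}")
    print(f"one-tailed 5% bound on p    = {p_bound_5:.3f}")
    print(f"2.5%-tail bound on p        = {p_bound_2_5:.3f}")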

With gubernatorials there is a larger sample, but two candidates came from behind. I won't try those calculations (no problem for you, or for me either when sober).

I was planning to critique the sophisticated model. I think I still will, since the simple calculation is not decisive given the sample size. OK, so the critique is: what about those AAA-rated CDO tranches? Those came from sophisticated models too, and the calculated probabilities were totally wrong. Here one key issue is the nonlinearity of the link function (I assume a cumulative normal). If, given parameter estimates including estimated polling error (larger than pure sampling error) and the estimated variance of true changes, you conclude that a Sestak win is a 1.7-or-more standard deviation event, then it is not reasonable to estimate the probability of a Sestak win as 4.5%. That calculation would be correct if you knew that the parameter estimates were exact. If you integrate over a posterior on the parameters, the probability will be closer to 0.5. That's the way the cumulative normal works. Or think of macroeconomic forecasting with VARs. They have very fancy models. They don't give 95% intervals -- they give 50% intervals. Dealing with uncertainty about the parameters in a probability model eliminates extremely low probabilities.
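Here is a toy illustration of that point. It is not Nate's actual model: the 1.7-standard-deviation figure comes from the paragraph above, while the lognormal posterior for the error standard deviation (with a 30% spread) is purely hypothetical.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    sigma_hat = 1.0            # point estimate of the polling-error sd (arbitrary units)
    lead = 1.7 * sigma_hat     # a Sestak win is a 1.7-sd event under the point estimate

    # Tail probability if the parameters were known exactly
    p_point = norm.cdf(-lead / sigma_hat)                    # ~= 0.045

    # Hypothetical posterior for sigma: lognormal around sigma_hat, 30% spread
    sigma_draws = sigma_hat * rng.lognormal(mean=0.0, sigma=0.3, size=100_000)

    # Average the tail probability over the posterior draws
    p_integrated = norm.cdf(-lead / sigma_draws).mean()

    print(f"tail probability at the point estimate:   {p_point:.3f}")
    print(f"tail probability averaged over posterior: {p_integrated:.3f}")  # larger

In this toy case the averaged figure comes out above the 4.5% point estimate; how much above depends entirely on how much parameter uncertainty you allow for.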

Finally, the asterisk. The whole research program assumes that this time it isn't different (as it must). But it seems to be different. Already the pollster used by Kos (until you warned him off) has reported numbers which must have been cooked. That hasn't happened before. Of course I am thinking of Rasmussen. They have a huge pollster effect this cycle. They didn't before. Either they are biased or everyone else is (I am paraphrasing you). You think those are the only two possibilities. I think you should calculate probabilities by guessing the probability that Rasmussen is blowing it, then adding that probability times the predictions without Rasmussen plus (1 minus that probability) times the predictions using only Rasmussen.
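A sketch of that mixture, with purely hypothetical numbers standing in for the guessed probability and the two sets of predictions:

    def mixed_win_probability(p_rasmussen_wrong, prob_without_rasmussen, prob_rasmussen_only):
        """Mix the two forecasts over the two allowed hypotheses:
        either Rasmussen is off, or every other pollster is."""
        return (p_rasmussen_wrong * prob_without_rasmussen
                + (1.0 - p_rasmussen_wrong) * prob_rasmussen_only)

    # Hypothetical inputs: a 70% chance Rasmussen is the biased one, a 60% Sestak
    # win probability dropping Rasmussen, 30% using Rasmussen alone.
    print(mixed_win_probability(0.70, 0.60, 0.30))   # 0.51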

If you are confident that either Rasmussen or everyone else is blowing it (and you wrote that with no qualifiers, IIRC), then you should use that confident belief when calculating probabilities. Anything else would be (gasp) un-Bayesian.
