Wednesday, October 05, 2011

Paul Kane's and Scott Clement's understanding of statistics is essentially as low as possible.

They wrote

Only 3 percent of Americans said they “strongly approve” of the performance of lawmakers on Capitol Hill — essentially as low as possible, given the poll’s margin of error of four percentage points.


That is, they said that mathematical statistics proves that we can't all agree on anything. They definitely asserted that it is "essentially" impossible for 100% of us to agree on something.

The problem is that pollsters have reported nonsense standard errors for so long that journalists have been convinced of something absurd.

In fact, the variance of the mean of a sample from a binomial distribution depends on the true probability -- in this case the fraction of the population that strongly approves of the performance of Congress. To be modest, pollsters always present the highest possible standard error, the one corresponding to an evenly divided population. To be honest, I think they report the largest plausible standard errors due to sampling alone to hide the fact that poll responses deviate from actual voting for reasons other than sampling error.

In any case, the standard error corresponding to 3% is 100% times the square root of (0.03*0.97/(sample size)), or roughly 0.55%. The convention is to report a number plus or minus 2 standard errors, so 3% plus or minus 1.1%. This would be a 95% interval if the distribution were normal. Using the normal approximation, one can reject the null that the true fraction of strong approvers of Congress is zero at the 95% level.
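
For anyone who wants to check the arithmetic, here is a minimal sketch. The article doesn't report the sample size; a 4 percentage point margin of error implies roughly 600 respondents, and 1,000 is another typical poll size, so I show both (the roughly 0.55% above corresponds to a sample of about 1,000):

```python
from math import sqrt

p_hat = 0.03  # observed share who "strongly approve"

for n in (600, 1000):  # assumed sample sizes; the poll's actual n is not reported
    se_worst = sqrt(0.5 * 0.5 / n)          # worst case: evenly divided population
    se_obs = sqrt(p_hat * (1 - p_hat) / n)  # evaluated at the observed 3%
    print(f"n = {n}: worst-case margin (2 SE) = {200 * se_worst:.1f} points, "
          f"SE at 3% = {100 * se_obs:.2f} points, "
          f"95% interval = 3.0 +/- {200 * se_obs:.1f} points")
```

With 600 respondents the worst-case margin is about 4.1 points but the standard error at 3% is about 0.7 points; with 1,000 respondents it is about 0.54 points, hence 3% plus or minus roughly 1.1 points. Either way, the interval does not come close to zero.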

Of course, if one has any sense at all, one rejects that null at the 100% level, not merely the 95% level, since some people said they strongly approve of Congress. The normal approximation works very well even for fairly small samples so long as the true probability is close to 0.5. Obviously it doesn't work whenever it gives an interval which includes the hypothesis that no one in the population would say something which someone in the sample actually said.
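
To illustrate with numbers (again assuming a sample of 600, since none is reported): if the true share were exactly zero, the probability of hearing even one "strongly approve" is exactly zero, so a single observed approver rejects that null outright. The normal approximation, by contrast, yields an interval that dips below zero when the count is that small:

```python
from math import sqrt

n = 600  # assumed sample size, not reported in the article

for approvers in (1, 18):            # one lone approver vs. 3% of 600
    p_hat = approvers / n
    se = sqrt(p_hat * (1 - p_hat) / n)
    lo, hi = p_hat - 2 * se, p_hat + 2 * se
    print(f"{approvers:>2} approver(s): normal-approximation 95% interval "
          f"({100 * lo:.2f}%, {100 * hi:.2f}%)")

# Under the null that the true share is exactly 0, the chance of observing
# at least one approver is 1 - (1 - 0)**n == 0, whatever n is.
```

A single approver out of 600 gives a plus-or-minus 2 standard error interval running from about -0.2% to 0.5%, which is where the approximation visibly breaks down; at 3% (18 of 600) the interval stays comfortably above zero.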

But that is an advanced topic.

Next topic: English. An obviously false statement is not made true by adding the qualifier "essentially." The fact is that some US adults strongly approve of our Congress. This is appalling, but they really exist. The word "essentially" was used to assert that this mere fact is negligible. This contempt for mere facticity reminds me of Hegel (them's fighting words where I come from).

Hegel did have a point. A historical movement can turn into its opposite. So the theory of statistics has become a way for some people to dismiss inconvenient data as "essentially" non-existent.

2 comments:

  1. Anonymous 4:22 AM

    The slander of Hegel continues! I can understand the reason that many dislike old Georg: he is brutally hard to read, and even upon multiple painful readings he is hard to understand. But how is it that public intellectuals, having never read him, feel fine dismissing him in the most insulting manner? I blame Bertrand Russell, but it probably runs deeper than that...

    -Will

  2. I wonder if you perceive me to be a public intellectual. The public hasn't noticed.

    I haven't read Russell. I am influenced by volume II of The Open Society and Its Enemies by Karl Popper.

    What is gained in exchange for the great effort of reading Hegel? What did he write that was original and was not utter nonsense? I ask for information as I have not read Hegel (except for a few passages translated by Popper).

    You can't think that no one should ever be dismissed without a reading -- it is not possible to read everyone (even not counting blogs). Why should Hegel be one of the top 1,000 authors whom I should read?

    (I do not place him in the top 1,000,000 based on what I have read about him -- and the decision to read someone must be based on something other than having read him).
