Thursday, May 02, 2013

Oregon Medicaid: Frakt, Carroll and Drum


An important study of the effect of Medicaid on health was published in the New England Journal of Medicine.  The study was based on a genuine experiment in which some people were given Medicaid and others weren't, based on a lottery.  Unfortunately, the results were communicated with an NEJM press release and not just the published article.  The results as received by the press were that Medicaid had no significant effect on recipients' health (except for significantly lower depression), which was interpreted as the study providing evidence that Medicaid does not improve health.

This means that somehow someone treated a failure to reject the null hypothesis as if it were a rejection of the alternative.

Hero bloggers Austin Frakt and Aaron Carroll try to get the valid use of statistics its boots on before the error runs around the world.  Read their important post.  Then read Kevin Drum's important post, where he links to their important post and also presents, you know, the data people are arguing about.  The damage is done and can't be fully undone, but I hope that this will be a case in which the medium of blogging undoes some of the damage due to publication by press release.  The actual authors in the actual article explain the issues very well (read the quote in the Frakt and Carroll post).  The NEJM makes only an abstract of the article available for free to non-journalists such as us.

In fact the raw data show better average health in the Medicaid-recipient subgroup than in the control group.  The estimated benefits are not large enough to be STATISTICALLY significant, because the numbers of people diagnosed with diabetes and hypercholesterolemia were low.

Arithmetic tells me that 308 or 309 people were diagnosed with diabetes in the Medicaid group and 64 people were diagnosed with diabetes in the control group.  Note this difference is statistically significant at all confidence levels ever used in the history of statistics (p < 0.1%; I get a z-score of about 11, and my back-of-the-envelope calculation is that the p-value is less than 10 to the minus 27th, which I think is called an octillionth, that is, one trillionth of a quadrillionth, or, uh, you know, a real small number).  The question not answered by the study is whether the current standard-of-care treatment of diabetes has any beneficial effects.  This was not and is not an open question.
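For readers who want to check the back-of-the-envelope claim, here is a minimal sketch of a pooled two-proportion z-test on those diagnosis counts (308 of 6,387 versus 64 of 5,842).  This is my own reconstruction of the arithmetic, not the study's published analysis, and the function name is mine.

```python
# Back-of-the-envelope two-proportion z-test for the diabetes-diagnosis
# counts quoted above (308/6387 Medicaid vs 64/5842 control).
import math

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic and two-sided normal p-value."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # proportion under the null of no difference
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    pval = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, pval

z, p = two_prop_z(308, 6387, 64, 5842)
print(f"z = {z:.1f}, p = {p:.1e}")
```

With these counts the z statistic comes out around 12 and the p-value is astronomically small, broadly consistent with the figures in the text.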

It isn't really surprising that offering treatment to 308 or 309 of the 6,387 people in the sample offered a chance to apply for Medicaid, rather than to 64 of the 5,842 people in the control sample, does not cause a statistically significant improvement in outcomes after two years when averaged over all people, diagnosed or not.
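The dilution argument can be made concrete with a toy calculation.  Suppose (an assumption of mine, not a number from the study) that treatment improves some health score by half a standard deviation for each extra person diagnosed and treated thanks to Medicaid, and by zero for everyone else.  The sample-wide average effect is then tiny:

```python
# Illustrative dilution arithmetic (toy effect size, not the study's estimate).
n_medicaid, n_control = 6387, 5842

# Extra diagnoses attributable to Medicaid, roughly: Medicaid-group count
# minus the control-group count rescaled to the Medicaid group's size.
newly_treated = 308 - 64 * (n_medicaid / n_control)

effect_size = 0.5  # assumed per-patient benefit, in standard-deviation units
avg_effect = effect_size * newly_treated / n_medicaid
print(f"extra diagnoses ~ {newly_treated:.0f}, "
      f"sample-wide effect ~ {avg_effect:.3f} SD")
```

Even a solid half-SD benefit for every newly treated patient averages out to roughly a fiftieth of a standard deviation across the whole group, far below what a two-year sample of this size can reliably detect.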

The sample of people needing care, which they might or might not receive, was very small.  This means that, by itself, the experiment doesn't contain enough evidence to cause the FDA to approve the sale of statins or insulin.  But no sensible person would question current medical practice based on a new study which provided some evidence that current treatments work, but not enough to constitute proof if one ignores all the other data.

The probability of diagnosis of diabetes was vastly greater for the Medicaid group.  After reading the data (I read them in Drum's post), one can only conclude that Medicaid fails to improve health if one believes either that current treatment of diabetes is ineffective (in spite of massive evidence from other studies that such treatment improves health), or that the diagnoses in the study were incorrect (but the same tests are valid for assessing health), or that there is some offsetting health cost of Medicaid (maybe moral hazard, as people who love needles eat sugar so they can get diabetes, which they know will be diagnosed, so maybe they can take insulin, which is such fun).  Any reasonable person looking at the raw data would conclude that Medicaid improves health.

The problem here is a combination of the "first do no harm" standard in the academic literature, which mandates very cautious, conservative claims, with the gross misunderstanding of interpreting "statistically insignificant" to mean "nonexistent" or "small" or "purple" or "round" or, well, something other than "statistically insignificant".
