Tuesday, October 23, 2007

Non RANDom panel attrition

This is huge. I didn't even know about the study, but it is very influential. In the 1970s the RAND Corporation performed a genuine experiment on the effect of health insurance copayments on patients' demand for health care and on health outcomes. They randomly assigned subjects to different health insurance plans, one of which covered all costs and others which required patients to pay part of the cost of their health care.

They found, unsurprisingly, that patients who had to pay demanded less health care and, surprisingly, that they were just as healthy. This study is an important reason there is so much support for health insurance that forces patients to pay part of the costs.

The result on health outcomes is especially odd, as RAND had doctors look at diagnoses and treatments, and the doctors thought that patients who faced copayments cut back on necessary as well as unnecessary treatments. One might argue that doctors don't know what works, as they haven't done experiments like the RAND study, and, of course, patients receiving unnecessary treatment are often following the advice of doctors.
Still, it is odd.

This study is still hugely influential, as noted by Ezra Klein:

it's almost impossible to overstate how much pull the RAND study has in health policy circles. Jason Furman's whole paper on cost sharing? Largely based on the RAND study. Robin Hanson's theories about slashing medical care in half? Largely based on the RAND study.


John Nyman argues that all this is explained by a serious statistical problem -- panel attrition. Participation in the study was, of course, voluntary. Participants could, if they wished, return to whatever health insurance they had before they joined the study (which means, I guess, that their insurers had to agree to their participation). Only 5 of the 184 participants who dropped out of the experiment had the no-cost-to-the-patient plan. This is in spite of the fact that 1,294 adult participants were randomly assigned to this free plan and 2,664 were assigned to one of six different cost-sharing plans.
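The lopsidedness of those drop-out figures can be checked with a little arithmetic. The enrollment numbers, the 184 total drop-outs, and the 5 free-plan drop-outs are from the figures above; attributing the remaining 179 drop-outs to the cost-sharing plans is my inference from those totals.

```python
# Enrollment and drop-out figures quoted above.
free_enrolled = 1294        # adults randomly assigned to the free plan
cost_share_enrolled = 2664  # adults assigned to one of six cost-sharing plans
total_dropouts = 184        # participants who dropped out of the experiment
free_dropouts = 5           # drop-outs who had the free plan

# Inferred: the remaining drop-outs came from the cost-sharing plans.
cost_share_dropouts = total_dropouts - free_dropouts

free_rate = free_dropouts / free_enrolled
cost_share_rate = cost_share_dropouts / cost_share_enrolled

print(f"free plan drop-out rate:    {free_rate:.1%}")
print(f"cost-sharing drop-out rate: {cost_share_rate:.1%}")
```

This reproduces the 6.7% cost-sharing drop-out rate cited below, against a free-plan rate of roughly 0.4% -- a seventeen-fold difference.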

In itself, it is unsurprising that the most generous plan had the fewest drop-outs, but this pattern could explain the surprising results of the study. It is possible, indeed very likely, that people dropped out because they were diagnosed with expensive-to-treat conditions and their pre-experiment health plan was more generous than their experimental health plan.

This would mean that their dropping out both reduced costs to the experimental cost-sharing plans and improved the distribution of health outcomes among participants who remained in the cost-sharing plans. The fact that doctors thought the decisions made by participants in the cost-sharing plans included forgoing needed care, which should have implied worse outcomes, provides more evidence for this hypothesis.
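The mechanism can be illustrated with a toy simulation -- entirely made-up numbers, not RAND's data. Both arms get identical health shocks, so with full retention they would show identical average outcomes; the only difference is an assumed tendency for sick cost-sharing participants to drop out.

```python
import random

random.seed(0)

N = 10_000           # participants per arm (hypothetical)
P_SICK = 0.10        # assumed: 10% develop a costly illness
DROP_IF_SICK = 0.5   # assumed: chance a sick cost-sharing participant quits

def mean_observed_outcome(cost_sharing: bool) -> float:
    """Average health outcome among participants who stay in the study."""
    outcomes = []
    for _ in range(N):
        sick = random.random() < P_SICK
        health = -1.0 if sick else 0.0  # sickness lowers the outcome
        # Sick participants facing copayments may return to their old,
        # more generous insurance and leave the experiment.
        if cost_sharing and sick and random.random() < DROP_IF_SICK:
            continue  # dropped out: outcome is never observed
        outcomes.append(health)
    return sum(outcomes) / len(outcomes)

free_mean = mean_observed_outcome(cost_sharing=False)
cs_mean = mean_observed_outcome(cost_sharing=True)
print(f"free plan, everyone retained:  {free_mean:.3f}")
print(f"cost sharing, sick drop out:   {cs_mean:.3f}")
```

In this sketch the cost-sharing arm looks healthier among those who stay, even though cost sharing had no effect on anyone's health -- exactly the kind of bias Nyman is describing.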

6.7% of participants with cost-sharing plans dropped out, a rate plenty large enough to invalidate the study. If a significant fraction of them did so because of costly sickness, the results of the study are worthless.

I think RAND should have enrolled only people who were uninsured before joining the study. They would all have benefited from participating, and fewer would have dropped out.

One way of trying to correct the study is to look only at low-income participants, who were more likely to be uninsured, reducing the endogenous drop-out problem. Lo and behold, the study concluded "that higher cost sharing leads individuals to use fewer health services, and further that (except for some lower income patients) the lower use of services had no negative health impacts."

That would tend to suggest that when the bias in favor of the cost-sharing plans was reduced (not eliminated), the negative health impacts of cost sharing were partially revealed.

Now I don't know whether the RAND study contains data on the pre-experiment health insurance of participants. If it does, a valid analysis could be based on the RAND data using only participants who were previously uninsured.
