Wednesday, March 12, 2008

Just can't let it go. SSRI Meta-analysis meta-addiction

One of the odd things about my reading of Kirsch et al 2008 is that I don't recall a comparison of results using all trials reported to the FDA with results using only published trials. This is odd, since much of the point of the paper is to examine publication bias in the extreme situation where publication decisions are made by for-profit corporations which spend tons of money on advertising. I don't know if it is odd that my reading comprehension is so minimal.

Anyway, it seems to me that it is possible to tell which studies were published from table 1, as there are references to published articles next to the protocol numbers.
Guessing that studies with such references were published and the others weren't, I create a variable "cite".
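
A minimal sketch of that coding step, assuming the table 1 data are in memory with a hypothetical string variable reference holding the citation column (the variable name is my guess; only dchange and cite appear in the output below):

* cite = 1 if the protocol number has a published reference next to it, 0 otherwise
gen cite = (reference != "")
tab cite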

Why lo and behold, the difference between the average improvement on the Hamilton scale of patients who got the SSRI and that of patients who got the placebo (dchange) is greater in the published studies than in the unpublished ones.

. sum dchange if cite==1

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
     dchange |        23    3.696522    2.385129  -.1999998        9.4

. sum dchange if cite==0

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
     dchange |        12        1.45    1.739033       -1.6        4.3

Is that difference as significant as it looks?

Sure is.

. reg dchange cite


------------------------------------------------------------------------------
     dchange |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        cite |   2.246522   .7802415     2.88   0.007     .6591083    3.833935
       _cons |       1.45   .6324977     2.29   0.028     .1631738    2.736826
------------------------------------------------------------------------------

Now that is what I call publication bias.

So how big a difference does it make ?

I argue at gruesome length below that the right way to conduct the meta-analysis is to first calculate, for each study, dchange: the average improvement of those who took the SSRI (change) minus the average improvement of those who took the placebo (pchange).

This is necessary because different studies may have had different mean improvements for many reasons and different studies had different proportions of patients receiving the SSRI. Thus receiving an SSRI might be correlated with improvements due to characteristics of the study (such as baseline depression).

Then I argue it is best to weight with definitely exogenous weights, which have nothing to do with the disturbance terms. This matters because the sample mean and the sample variance are not independent for many distributions (the normal is an exception), so weights based on estimated variances are correlated with the estimates themselves and bias the weighted average. I think it reasonable to use the weights that would give efficient estimates if the true variance of the disturbance terms were the same in all studies. Of course I don't think that, so I assume such estimates are inefficient. However, they are unbiased and plenty precise enough.

So I think that the estimate of the additional benefit of an SSRI over placebo should be the weighted average of dchange with weights equal to 1/((1/n)+(1/pn)), where n is the number of patients who received the SSRI and pn is the number of patients who received the placebo.
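
A minimal Stata sketch of this estimator, assuming the per-study variables are named change, pchange, n, and pn (my guesses at names; only dchange and cite appear in the output above):

* per-study difference in average improvement, SSRI minus placebo
gen dchange = change - pchange
* exogenous weights: efficient if every study had the same disturbance variance
gen w = 1/((1/n) + (1/pn))
* weighted average extra benefit: published studies only, then all 35
sum dchange [aweight=w] if cite==1
sum dchange [aweight=w]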

If only studies with references are used (published studies, I guess), this gives an estimate of the additional benefit of 3.23.

If all 35 studies are used this gives an estimate of 2.64.

I conclude that publication bias biases up the estimated benefit of SSRIs by about 0.6 points on the Hamilton scale (3.23 minus 2.64).

A simpler approach, which I argue above is invalid, is to calculate the average benefit over all patients who took the SSRI (just multiply the average for a study by the number who took an SSRI, add up, and divide by the total number of patients in all the studies who took an SSRI), and likewise for the placebo group. This gives means of 10.04 for the SSRI patients and 7.85 for the placebo patients, and thus an estimated extra benefit of 2.19. The less precise procedure makes almost as large a difference as the inclusion of unpublished studies.
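
The same pooled calculation as a sketch, again under the assumption that the arm-level variables are named change, pchange, n, and pn:

* patient-weighted mean improvement in each arm, pooling across studies
* fweight treats n and pn as the (integer) patient counts they are
sum change [fweight=n]
sum pchange [fweight=pn]
* the estimated extra benefit is the difference of the two weighted means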

Finally, Kirsch et al choose to weight observations by the inverse of the estimated variance of the effect in each subsample (two subsamples per trial: SSRI and placebo).
This increases efficiency compared to the simple procedure above, but may introduce bias. In fact it does introduce bias, as demonstrated in posts below.
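
A sketch of that weighting scheme, assuming hypothetical variables sd and psd hold each arm's standard deviation of improvement, so the estimated variance of an arm's mean is sd^2/n (or psd^2/pn):

* inverse of the estimated variance of each subsample mean
gen wssri = n/(sd^2)
gen wplac = pn/(psd^2)
sum change [aweight=wssri]
sum pchange [aweight=wplac]

Because sd and psd are estimated from the same samples as the means, these weights depend on the disturbances; that dependence is exactly the source of the bias discussed in the posts below.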

This gives a weighted average improvement of 9.59 with the SSRI and 7.81 with the placebo, so an added improvement with the SSRI of 1.78.

In my view, in passing from the publication-biased 3.23 to the final 1.78, only 0.6 of the change is due to removing the publication bias, while 0.85 is due to inefficient and biased meta-analysis.

If the subsample of studies with references (I guess published studies) is analyzed with the method of Kirsch et al, the weighted average improvement with the SSRI is 9.63 and the weighted average improvement with the placebo is 7.37, so the added improvement with the SSRI is 2.26.

If I have correctly inferred which studies were publicly available before Kirsch et al's FOIA request, I conclude that they would have argued that the effect of SSRIs is not clinically significant based on a meta-analysis of only published studies.

Update: I know I am fascinated by medical data, but I don't think the kind non-spammer who sent me this e-mail understands exactly what I like to do with them:

from: medical billing
subject: Certified coders learn up to 35000 per year RE: FWD:

1 comment:

  1. That's very interesting/disturbing.

    NICE have said that they will be looking with great interest at this study when they perform their next review of the evidence base for treating depression, let's hope they notice/have their attention drawn to the flaws in this study.

    Might I suggest that a response to the paper might be in order.
