Rob Stein reports on the bitter controversy triggered when the FDA decided that it needed more data before approving Provenge®, an immunostimulatory treatment for prostate cancer. The question is whether the apparent benefits from Provenge could be due to chance.
I think the question is answered. Provenge provided statistically significant benefits in a Phase III trial and should be approved.
I have a proposal: beta testing. I think it would be better if the FDA approved drugs for treatment provisionally, with a requirement to report the use, treatment outcome, and side effects for each patient, and a final decision based on this large uncontrolled study. As it is, once the FDA approves a pharmaceutical it is very hard to withdraw approval. I think approval is withdrawn only when there is statistically significant evidence of previously unknown side effects, not for ineffectiveness. In any case, the burden of proof is on those who advocate disapproval.
To support my claim that the FDA is wrong, I will look at one data point (from a press release).
In Study D9901 [snip] 34 percent of patients receiving Provenge were alive at 36 months compared to 11 percent of patients receiving placebo
A 36 month final survival analysis was required per the study design. The study randomized 127 men to receive three infusions of Provenge or placebo over a four-week period.
I will assume that 64 patients got Provenge, of whom 22 survived (34.375%), and that 63 got placebo, of whom 7 survived (11.11...%). Under the null that Provenge doesn't work, the expected number of survivors in the Provenge group would be 64*29/127 = 14.61, with standard error sqrt(29*64*63/(127*127)) = 2.69, so the observed 22 is 2.74 standard deviations above the expected value under the null. For 29 observations, a binomial with probability almost exactly one half is almost indistinguishable from a normal, so the probability that the improved survival is due to chance would be less than 0.00305 (one-tailed). This is a fairly low number. Two-tailed (testing the null against the alternative that Provenge may help or may hurt) gives 0.006.
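The arithmetic above can be checked in a few lines of Python. This is a sketch of the same normal approximation, using the survivor counts quoted above:

```python
import math

# Null hypothesis: the 29 survivors are allocated at random between
# the 64 Provenge patients and the 63 placebo patients.
expected = 64 * 29 / 127                    # expected Provenge survivors, ~14.61
se = math.sqrt(29 * 64 * 63) / 127          # standard error, ~2.69
z = (22 - expected) / se                    # ~2.74 standard deviations above the null

# One-tailed normal tail probability via the complementary error function.
p_one_tailed = 0.5 * math.erfc(z / math.sqrt(2))   # ~0.003
p_two_tailed = 2 * p_one_tailed                    # ~0.006
print(z, p_one_tailed, p_two_tailed)
```

An exact hypergeometric test would be slightly different, but with these counts the normal approximation is close enough for the argument.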
Now it is important that, while 36-month survival was one of the test statistics required per the study design, there were others. A very cautious approach due to Bonferroni is to act as if the rejection events for the different tests never overlap, dividing the significance level by the number of tests (this of course overstates the chance of a false rejection, so the Bonferroni rule rejects a true null less often than the stated size; at the standard 5% level, incorrect rejection occurs less than 5% of the time).
We are left with two possibilities. The FDA approved a trial with at least 9 test statistics (that is, they don't know statistics) or they decided that standard confidence intervals aren't good enough to stop forcing dying people to wait for a promising treatment (that is, they don't know statistics).
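Where the "at least 9" comes from: under a Bonferroni correction with k tests, each individual p-value must fall below 0.05/k to be declared significant. A quick sketch of the arithmetic, using the two-tailed p-value of roughly 0.006 from the survival analysis:

```python
p_two_tailed = 0.006   # two-tailed p-value for 36-month survival
alpha = 0.05           # standard significance level

# Find the smallest number of tests k at which the Bonferroni threshold
# alpha/k drops to or below the observed p-value, i.e. the point where
# even this result would be declared "insignificant".
k = 1
while p_two_tailed < alpha / k:
    k += 1
print(k)  # 9: with 9 or more tests, 0.05/k <= 0.006
```

So only a trial design with at least 9 test statistics, combined with the most conservative possible correction, would make the 36-month survival result fail to reach significance.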
What is going on? The FDA seems to have a "better safe than sorry" rule in which asking for more data is the safe option. Since Provenge has only mild (and non-life-threatening) side effects and the alternative is certain death, this seems an odd definition of safe.
Part of what seems to be going on is that many people are totally confused by the Neyman-Pearson approach (significance levels and all that). It is very common to treat insignificant evidence against the null as evidence in favor of the null. The Bonferroni approach is very cautious about rejecting the null. If some hypothetical organization, which I will call the DFA, were to use the rule that all test statistics must be significant, or the rule that most test statistics must be significant, it would be making a gross mistake.
It also seems likely to me that the FDA is cautious about approving drugs because it is very hard to disapprove them. I propose beta testing (it would be a sort of phase IV) in which a drug is tentatively approved, but the treatment outcomes (and of course side effects) of all patients must be reported, with a decision about regular approval in a year. This is burdensome, but at least patients would be getting the promising treatment.