This AHCPR report is bullshit. Here's why:
Science journalists shouldn't "play scientist." It's not my job to evaluate the actual statistics, nor am I competent to do so. But before I tell doctors that a government agency says it's OK to treat their patients for a life-threatening illness with a cheaper drug, I do have a responsibility to check for certain warning flags. And I also have a responsibility to check with people who understand statistics and clinical practice better than I do.
Warning Flag Number 1: Scientists have to publish their data for other scientists to review. The AHCPR didn't publish this report. They released only a summary and a press release, and handed those out to reporters. (Remember cold fusion?) On principle, I never write a story about a report like this without reading the report itself. The AHCPR FedExed me a photocopied draft. I read it all weekend. The data in the report didn't seem to support the summary or the press release.
The AHCPR said that they would be publishing the full report in "mid-1999," in print and on the Internet. As of this writing (13 January 2000), they have still not published that report. (The web site only says, "7.Depression_New Pharmacotherapies: Summary." When -- and if -- it's published, it will have another link that says, "Evidence Report," like the other evidence reports.) Doctors who are smarter than me have not been able to review it. I don't need a doctorate to know something's wrong.
According to Mulrow, I was the only reporter writing more than a next-day story about that report. Most psychiatrists couldn't get it. I had it. It was my heavy responsibility to figure it out as best I could, and to give them the best account I could of whether this government study meant that they could now prescribe the older, cheaper antidepressant drugs, as they had read in the New York Times.
My best understanding is this: This report doesn't prove that the older and newer drugs are equivalent -- Mulrow couldn't give me the confidence intervals to demonstrate that. It only proves that they couldn't find any difference between the older and newer drugs. "No proof of difference" is not the same as "proof of no difference."
As Kupfer pointed out, psychiatric trials have small Ns. If you set up a bad enough study to compare two drugs -- if, for example, the Ns are too small, or the duration of treatment is too brief -- you won't find any statistically significant differences between them. This report doesn't compare older and newer drugs head-to-head. It merely compiles data for the older drugs, separately compiles data for the newer drugs, and concludes that it can't find any difference between them.
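To see how easily a real difference can hide behind small Ns, here is a minimal sketch in Python -- my own illustration, with made-up response rates, not figures from the AHCPR report. Two hypothetical drugs that genuinely differ (60% vs. 45% response) are compared in arms of 30 patients. Most of the time the test finds no significant difference, and the confidence interval on the difference is wide enough to cover both "no difference" and a clinically important one.

    # Illustrative simulation only: assumed response rates, not AHCPR data.
    import math
    import random

    random.seed(0)

    TRUE_RATE_OLD, TRUE_RATE_NEW = 0.45, 0.60   # assumed true response rates
    N_PER_ARM = 30                              # a typically small psychiatric trial
    TRIALS = 2000                               # simulated replications

    def two_proportion_test(x1, n1, x2, n2):
        """Return (two-sided p-value, 95% CI on the difference) for two proportions."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)
        se_null = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p2 - p1) / se_null if se_null > 0 else 0.0
        p_value = math.erfc(abs(z) / math.sqrt(2))
        se_diff = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
        ci = (p2 - p1 - 1.96 * se_diff, p2 - p1 + 1.96 * se_diff)
        return p_value, ci

    significant = 0
    for _ in range(TRIALS):
        old = sum(random.random() < TRUE_RATE_OLD for _ in range(N_PER_ARM))
        new = sum(random.random() < TRUE_RATE_NEW for _ in range(N_PER_ARM))
        p, _ = two_proportion_test(old, N_PER_ARM, new, N_PER_ARM)
        if p < 0.05:
            significant += 1

    # With these assumptions the test detects the real difference only about
    # 20% of the time -- the other 80% of trials report "no difference found."
    print(f"Power with N={N_PER_ARM} per arm: {significant / TRIALS:.0%}")

    # One plausible trial outcome (14/30 vs. 18/30): the 95% CI on the
    # difference runs from roughly -0.12 to +0.38, spanning both zero and a
    # clinically meaningful advantage.
    p, ci = two_proportion_test(14, N_PER_ARM, 18, N_PER_ARM)
    print(f"Example 95% CI on the difference: {ci[0]:+.2f} to {ci[1]:+.2f}")

That is the whole point about confidence intervals: a trial too small to detect a real difference will dutifully report that it found none.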
The AHCPR is turning out outcome reports on an assembly line, like an HMO doctor scheduling patients. Under the banner of evidence-based medicine, the managed care companies will use reports like this to overrule the clinical judgment of practicing physicians. If you're playing the game of evidence-based medicine, you have to play by the rules. The rules say that the AHCPR has to publish its study. The rules say that the AHCPR has the burden of proof. The rules say that the best way to compare two drugs is with randomized, prospective, head-to-head studies.
And if the new rules are consumer choice in a free market, then the AHCPR has to explain to consumers why we should believe that the old drugs are as good as the new ones.
The AHCPR didn't meet that burden.
Late note: After I wrote this story, I found the following study, which confirms my suspicions about side effects: "Excess risk of myocardial infarction in patients treated with antidepressant medications: association with use of tricyclic agents," Hillel W. Cohen et al., Am J Med (January 2000) 108(1):2-8
The Cochrane Review on SSRIs vs. other antidepressants for depressive disorder confirms that the effectiveness is equivalent.
And here's a report that found St. John's wort equivalent to imipramine: "Comparison of St John's wort and imipramine for treating depression: randomised controlled trial," BMJ (2 September 2000) 321:536-539