Now a “scientific” study is claiming that these FDA warning labels on antidepressants caused a rise in suicide attempts, ignoring the vast amount of data showing that it is the taking of antidepressants itself that leads to thoughts of suicide, suicide attempts and completed suicides.
It’s interesting to trace how this bad scientific study got great press coverage to promote this idea.
The study is entitled “Changes in antidepressant use by young people and suicidal behavior after FDA warnings and media coverage: quasi-experimental study”.
It was headed by a researcher at Harvard Medical School’s Department of Population Medicine and the Harvard Pilgrim Health Care Institute. It was published on June 18, 2014 in The BMJ (formerly the British Medical Journal), a widely respected international peer-reviewed medical journal that has been around since 1840.
The investigators had access to health care records for some 10 million people in 12 US states. They attempted to examine whether the FDA warnings affected the number of antidepressants given out and the number of suicide attempts and suicides during the years following the warnings.
The study concluded that antidepressant use had gone down and suicide attempts had gone up – both incorrect statements according to other scientists, who criticized the study’s techniques and findings.
The study even admitted to some research “limitations”.
Suicide attempts are identified by what are called external cause of injury codes (E-codes).
Because these suicide attempt E-codes were often not recorded in the commercial health records of the 12 states, the team decided to substitute “poisoning by psychotropic agents” as a proxy to measure suicide attempts!
They admittedly were guessing that it would represent suicide attempts.
Yet the media immediately jumped on the bandwagon with articles like these:
• “Study finds rise in suicide attempts after FDA warning” – The Boston Globe – June 18, 2014
• “As Antidepressant Warnings Toughened, Teen Suicide Attempts Rose: Study” – published in both HealthDay and MedlinePlus (an online service of the National Institutes of Health) – June 19, 2014
• “‘Black Box’ Warning on Antidepressants Raised Suicide Attempts” – NBC News – June 18, 2014
• “Report: Government warnings about antidepressants may have led to more suicide attempts” – The Washington Post – June 18, 2014
• “Teen Suicide Attempts Rise as Warning Cuts Medicine Use” – Bloomberg News – June 18, 2014
• “Warnings on antidepressants may have backfired” – USA Today – June 18, 2014
• “Study: FDA Warnings About Antidepressant Side Effects May Have Caused Spike In Suicide Attempts” – CBS Philadelphia – June 19, 2014
• “Antidepressant warnings tied to suicide attempts in youths” – Reuters UK – June 18, 2014
Meanwhile, at The BMJ website this study immediately drew harsh criticism from peers in the scientific community – viewpoints that never made it into the news stories of the popular press and psychiatric mouthpieces but have remained buried in the comment section at The BMJ.
These quotes from the peer responses to the study give the real story:
• “Most importantly, we doubt psychotropic poisonings (ICD-9 code 969) are a “validated proxy for suicide attempts.” Most suicide attempts in young people do not involve poisoning by psychotropic drugs; and most poisoning events with psychotropic drugs, which include caffeine, stimulants, benzodiazepines, hallucinogens, and several other agents, are not suicide attempts” …
“The large “declines” the authors report are relative to forward projection of trends prior to the FDA warning…In absolute terms, however, the authors find only a modest decrease in the fraction of children and adolescents receiving antidepressants,…Yet the hypothesis of increases in suicidality is premised on absolute declines in antidepressant use, not declines relative to a hypothetical projection.”
Mark Olfson, Professor of Psychiatry, Columbia University, New York, NY
Michael Schoenbaum, PhD, National Institute of Mental Health, Bethesda, MD
• “This study is fundamentally flawed:
Visit the Harvard Center For Placebo Studies. There, you will find conclusive evidence that not a single anti-depressant has any efficacy above that provided by placebo…
The analysis has probably reached the wrong conclusion. The study presents a statistical association and proposes a causal relationship with no proof and three key assumptions: that anti-depressants have efficacy, that increases in overdose were due solely to suicide attempts, and that the two were somehow linked. The study does not demonstrate causality, and the preponderance of secondary evidence would suggest there is no causality at all in the way implied.”
Gerald W Gaines
mental health clinic owner
Depression Recovery Centers
• “The study is not reliable
The FDA’s large meta-analysis of 100,000 patients who had participated in placebo-controlled randomised trials found that antidepressants increase suicidal behaviour up till about the age of 40 (2), and in young people, the risk was doubled.
It is therefore a highly convincing finding that antidepressants increase the risk of suicide in young people, and randomised trials are far more reliable than the before-after analysis that Lu et al. presented, which seemed to find the opposite result. There must therefore be major problems with their research, and indeed there are.
The authors didn’t study their primary outcome, suicide attempts on SSRIs, but used a proxy, which was “poisoning by psychotropic agents” (ICD-9 code 969). This is a very poor surrogate. It covers all psychotropic agents, not only SSRIs, and people on SSRIs who attempt to commit suicide don’t usually poison themselves (and cannot really do it with SSRIs). Suicides on SSRIs are often violent, e.g. death by hanging, gun shots or caused by vehicles. Further, many suicide attempts and much suicidal behaviour don’t lead to hospital admission and might therefore not be detected and coded.
Most importantly, the authors don’t seem to know that any dose change with SSRIs increases the risk of suicide. Treatment with SSRIs leads to dependency in many people (3,4) and it is well known that withdrawal from SSRIs can lead to terrible symptoms, which increase the risk of suicide. Thus, if people suddenly stop taking SSRIs because of the FDA’s warnings, suicide attempts might increase, but this is likely due to the withdrawal symptoms, not because SSRIs protect against suicide in young people, which they don’t (3).
The authors seem to be biased in favour of antidepressants. For example, they conclude that “Undertreated mood disorders can have severe negative consequences.” As they focus on young people, it would have been more correct to say that treatment of mood disorders can have severe negative consequences, which is what the FDA analysis showed. … Worst of all, the FDA had informed the companies that it wouldn’t check their reports of suicidality, despite the fact that the FDA was aware that several companies had manipulated the suicidality data they had submitted to the FDA earlier (3)!
The findings in the report by Lu et al. should be ignored. SSRIs don’t decrease suicidal behaviour in young people, as they claim. SSRIs increase it, and it seems that the risk increases with dose, as would be expected (5).”
Peter C Gøtzsche
Nordic Cochrane Centre
• “You can’t assess the reduction in efficacy of not taking a drug when there is no data supporting evidence of efficacy when taking it.”
David Braley Nancy Gordon Chair in Family Medicine
Longwood Road South, Ontario, Canada.
• “Among the young adults (Figure 2) there was no change in antidepressant drug prescribing – just a flat trajectory. Plus, there was no change in sample size, so the rise in psychotropic drug poisonings was unrelated to change in antidepressant drug use – absolute or per capita. Something else was going on. Meth, anybody? Oxycodone, anybody? These authors don’t have a clue. This points up how inadequate is their proxy measure of suicide attempts. …
There was no change in the hard outcome of completed suicide. The authors tried to finesse this finding with the statement that completed suicide is a rare event. Well, it’s not that rare, especially when you trumpet that you have a sample size in the millions.
I won’t even bother to critique the special pleading and the tendentious tone of this report. The decision to publish it was not the BMJ’s finest hour.”
Tendentious — expressing or intending to promote a particular cause or point of view, especially a controversial one
Bernard J Carroll
Pacific Behavioral Research Foundation
Carmel, California 93923, USA
• “First this paper suggests that FDA warnings led to a lower than expected increase in antidepressant usage. This is terribly unlikely. FDA warnings have rarely had effects like this. In fact companies have exploited warnings to increase sales – as for instance warnings about antidepressants and the risk of birth defects.
It is more likely that the relative decline in usage is linked to these drugs going off patent. At the time there was a very active marketing of mood-stabilizers for bipolar disorder with the message if your patient has become suicidal on an Antidepressant this is because they really have bipolar disorder and should have a mood stabilizer.
FDA’s resources to influence prescribing pale compared to those of companies.
….Fifth, there is no apparent openness in the paper to the possibility of antidepressant withdrawal related suicidal acts. In the unlikely event that FDA warnings were effective, some people may have come to grief because of withdrawal suicidality – but this is the fault of the medication rather than the warnings.
Finally, if there has been a true increase in suicide attempts, this parallels an increase in the use of mood stabilizers, the clinical trials of which show almost exactly the same increases in suicidal act rates over placebo as is shown for the antidepressants.
There is in summary so little basis in the data presented here for the argument being made that this paper perhaps offers better evidence of an agenda than anything else.”
David T Healy
Professor of Psychiatry
Hergest Unit Wales LL57 2PW
• “Many issues regarding the specific methods of this study have been described in other responses. My comment focuses narrowly on the authors’ claim that “Treating depression in young people with antidepressants can improve mood.” The foundation of the current study is shaky if antidepressants do not have evidence of improving mood – or other important mental health outcomes – in children and adolescents.
To support their claim of antidepressant efficacy, the authors quite selectively cite two publications stemming from a single controlled trial. This raises four related issues.
1. The majority of clinical trials find no efficacy for antidepressants versus placebo in treating youth depression.
2. A more comprehensive meta-analytic review of relevant clinical trials found an overall standardized mean difference effect size on clinician-rated depression measures of .20 (1), which is clinically insignificant.
3. There is zero evidence that antidepressants in depressed children and adolescents improve well-being relative to placebo. I say this as the lead author of a recent meta-analysis which found no statistically significant benefit across the small number of trials which reported such outcomes (measures of quality of life, global mental health, self-esteem, and autonomy) (2).
4. On depression self-reports, children and adolescents report no more benefit on antidepressants than on placebo (2).
The other reference cited to support their claim about antidepressant efficacy was a pooled analysis which narrowly focused on selected trials of fluoxetine (3) – excluding trials of many other antidepressants which failed to show efficacy. Severe problems with this pooled analysis were pointed out in multiple letters to the editor (4, 5) as well as in online comments hidden behind the journal’s paywall.
Why would the authors – or anyone else – expect regulatory warnings to cause negative outcomes in the context of drugs which have no clear benefit for depressed youth?”
Glen I Spielmans
Associate Professor of Psychology
Metropolitan State University
St. Paul MN, USA 55108
• “Several obvious, serious flaws to this study were adeptly exposed by previous posters. I contend this “study” and the BMJ’s publication of it may very well promote violations of the Hippocratic Oath.
…the consequences of pretending that SSRIs have been proven to be safe and effective treatment for childhood depression—a fairy tale the BMJ and the study’s authors chose to believe—are far more serious than professional embarrassment.
This study’s flawed premise, measurements and far-reaching conclusions should have been reason enough to avoid publication. Given that these problems did not prevent publication, the BMJ and authors should take credit for some of the possible outcomes publication might create. These include:
Misleading the public and doctors into believing SSRI’s are safe and effective treatment for childhood depression.
Implying without reliable data that the Black Box warnings actually increased the suicide rate among children and teens and that the warnings were an “overreaction”.
Encouraging doctors to downplay and/or fail to communicate with patients and their families the SSRI Black Box warnings and serious adverse side effects SSRI’s can and do pose.
Presuming that there is no specific profile to predict which children might develop SSRI-induced akathisia, psychosis and suicidality, why would anyone in good faith encourage doctors, patients, families and caregivers to downplay the increased suicidality risks all SSRIs pose? Publishing and promoting this study might very well cause more suffering, torture and ego-dystonic “suicides” experienced by some children as a result of future SSRI prescriptions.
Perhaps the study’s authors and the BMJ might publish future research that directly analyzes first-person data as reported by the patients and their families? Sadly, some of these children have suffered and died as a result of the SSRIs prescribed. But some children left diaries and notes… children with no professional agenda, no ulterior motives, often speak the plain truth.
I was fortunate to have previously read all responses prior to the BMJ’s unusual online data loss. It is to be hoped that the BMJ will recover and repost all lost postings. In lieu of such, the BMJ should ethically seek a call for responses and compile and publish these responses as a separate article in the next issue.”
Kristina K. Gehrki
10818 Fieldwood Drive, Fairfax, Virginia, 22030 USA
• “Attempts did not increase. Lu et al’s opposite finding probably has more to do with the unusual proxy they used (one they said was validated by a paper that two of us—MM and CB—co-authored) than with an actual change in suicidal behavior among youth.”
(They go on to summarize five readily available, online data sources that give direct and valid measures of youth suicidal behavior)
“Lu’s study findings are roundly unsupported by national data. On balance, the evidence shows no increase in suicidal behavior among young people following the drop in antidepressant prescribing. It is important that we get this right because the safety of young people is at stake. Lu et al’s paper sounding the alarm that attempts increased was extensively covered in the media. Their advice that the media should be more circumspect when covering dire warnings about antidepressant prescribing applies as well to their own paper.”
Catherine W Barber, Matthew Miller, Deborah Azrael
Harvard School of Public Health
Boston MA 02115
• “Time for a retraction?
The BMJ has some hard thinking to do here. A substandard article with large policy implications slipped through their review and editing process and it was trumpeted in the world media. The Rapid Responses pointed up the weak tradecraft of the Lu report, and the coup de grace was delivered by this Rapid Response comment from Barber, Miller and Azrael.
The calculus for the BMJ is to decide whether the article should be retracted or whether on-line publication of the critical Rapid Responses is a sufficient disavowal of the Lu report.
Certainly, a retraction would shine a stronger public searchlight on the compromised validity of the Lu report than just the Rapid Responses can do.
In a way, the issue is like that of declaring conflicts of interest. Simply declaring a compromise through stating competing interests does not remove the compromise. Likewise, simply publishing critical responses does not remove the compromise from the journal or from the original authors.”
Bernard J Carroll
Pacific Behavioral Research Foundation
Carmel, California 93923 USA