8

An Episodic Illness Turns Chronic

“With the range of available treatments for depression, one might wonder why depression-related disability is on the rise.”

—CAROLYN DEWA, CENTRE FOR ADDICTION AND MENTAL HEALTH, ONTARIO (2001)1

M-Power in Boston is a peer-run advocacy group for the mentally ill, and while I was at one of their meetings in April 2008, a young, quiet woman came up to me and whispered, “I’d be willing to talk to you.” Red hair fell about her shoulders, and she seemed so shy as to almost be frightened. Yet when Melissa Sances told me her story a few days later, she spoke in the most candid manner possible, her shyness transformed into an introspective honesty so intense that when she was recounting her struggles growing up in Sandwich on Cape Cod, she suddenly stopped and said: “I was unhappy, but I didn’t have an awareness that I was depressed.” It was important that I understood the difference between those two emotions.

Her unhappiness as a child was composed of familiar ingredients. She felt socially awkward and “different” from other kids at school, and after her parents divorced when she was eight, she and her brothers lived with their mother, who struggled with depression. In middle school, Melissa began to come out of her shell, making friends and feeling “more normal,” only to then run head-on into the torments of puberty. “When I was fourteen, I was overweight, I had acne. I felt like a social outcast, and the kids at high school were very cruel. I was called a freak and ugly. I would sit at my desk with my head down, and my hair pulled over my face, trying to hide from the world. Every day I woke up feeling like I wanted to die.”

Today, Melissa is an attractive woman, and so it is a bit surprising to learn of this ugly-duckling moment from her past. But with her schoolmates taunting her, her childhood unhappiness metamorphosed into a deep depression, and when she was sixteen, she tried to commit suicide by gulping down handfuls of Benadryl and Valium. She woke up in the hospital, where she was told that she had a mental illness and was prescribed an antidepressant. “The psychiatrist tells me that it adjusts serotonin levels, and that I will probably have to be on it for the rest of my life. I cried when I heard that.”

For a time, Zoloft worked great. “I was like a new person,” Melissa recalls. “I became open to people, and I made a lot of friends. I was the pitcher on the softball team.” During her senior year, she began making plans to attend Emerson College in Boston, thinking that she would study creative writing. Only then, slowly but surely, Zoloft’s magic started to fade. Melissa began to take higher doses to keep her depression at bay, and eventually her psychiatrist switched her to a very high dose of Paxil, which left her feeling like a zombie. “I was out of it. During a softball game, someone hit a ground ball to me and I just held the ball. I didn’t know what to do with it. I told my team I was sorry.”

Melissa has struggled with depression ever since. It followed her to college, first to Emerson and then to UMass Dartmouth, and although it did lift somewhat when she became immersed in writing for the UMass newspaper, it never entirely went away. She tried this drug and that drug, but none brought any lasting relief. After graduating, she found a job as an editorial assistant at a magazine, but depression caught up with her there, too, and in late 2007, the government deemed her eligible to receive SSDI because of her illness.

“I have always been told that a person has to accept that the illness is chronic,” she says, at the end of our interview. “You can be ‘in recovery,’ but you can never be ‘recovered.’ But I don’t want to be on disability forever, and I have started to question whether depression is really a chemical thing. What are the origins of my despair? How can I really help myself? I want to honor the other parts of me, other than the sick part that I’m always thinking about. I think that depression is like a weed that I have been watering, and I want to pull up that weed, and I am starting to look to people for solutions. I really don’t know what the drugs did for me all these years, but I do know that I am disappointed in how things have turned out.”

Such is Melissa Sances’s story. Today it is a fairly common one. A distressed teenager is diagnosed with depression and put on an antidepressant, and years later he or she is still struggling with the condition. But if we return to the 1950s, we will discover that depression rarely struck someone as young as Melissa, and it rarely turned into the chronic suffering that she has experienced. Her course of illness is, for the most part, unique to our times.

The Way Depression Used to Be

Melancholy, of course, visits nearly everyone now and then. “I am a man, and that is reason enough to be miserable,” wrote the Greek poet Menander in the fourth century B.C., a sentiment that has been echoed by writers and philosophers ever since.2 In his seventeenth-century tome The Anatomy of Melancholy, the English scholar Robert Burton advised that everyone “feels the smart of it … it is most absurd and ridiculous for any mortal man to look for a perpetual tenure of happiness in this life.” It was only when such gloomy states became a “habit,” Burton said, that they became a “dis-ease.”3

This was the same distinction that Hippocrates had made more than two thousand years earlier, when he identified persistent melancholy as an illness, attributing it to an excess of black bile (melaina chole in Greek). Symptoms included “sadness, anxiety, moral dejection, [and] tendency to suicide” accompanied by “prolonged fear.” To curb the excess of black bile and bring the four humors of the body back into balance, Hippocrates recommended the administration of mandrake and hellebore, changes in diet, and the use of cathartic and emetic herbs.4

During the Middle Ages, the deeply melancholic person was seen as possessed by demons. Priests and exorcists would be called upon to drive out the devils. With the arrival of the Renaissance in the fifteenth century, the teachings of the Greeks were rediscovered, and physicians once again offered medical explanations for persistent melancholy. After William Harvey discovered in 1628 that blood circulated throughout the body, many European doctors reasoned that this illness arose from a lack of blood to the brain.

Psychiatry’s modern conception of depression has its roots in Emil Kraepelin’s work. In his 1899 book, Lehrbuch der Psychiatrie, Kraepelin divided psychotic disorders into two broad categories—dementia praecox and manic-depressive psychosis. The latter category mostly comprised three subtypes—depressive episode only, manic episode only, and episodes of both kinds. But whereas dementia praecox patients deteriorated over time, the manic-depressive group had fairly good long-term outcomes. “Usually all morbid manifestations completely disappear; but where that is exceptionally not the case, only a rather slight, peculiar psychic weakness develops,” Kraepelin explained in a 1921 text.5

Today, Kraepelin’s depression-only group would be diagnosed with unipolar depression, and in the 1960s and early 1970s, prominent psychiatrists at academic medical centers and at the NIMH described this disorder as fairly rare and having a good long-term course. In her 1968 book, The Epidemiology of Depression, Charlotte Silverman, who directed epidemiology studies for the NIMH, noted that community surveys in the 1930s and 1940s had found that fewer than one in a thousand adults suffered an episode of clinical depression each year. Furthermore, most who were struck did not need to be hospitalized. In 1955, there were only 7,250 “first admissions” for depression in state and county mental hospitals. The total number of depressed patients in the nation’s mental hospitals that year was around 38,200, a disability rate of one in every 4,345 people.6

Depression, Silverman and others noted, was primarily an “ailment of middle aged and older persons.” In 1956, 90 percent of the first-admissions to public and private hospitals for depression were thirty-five years and older.7 Depressive episodes, explained Baltimore psychiatrist Frank Ayd Jr., in his 1962 book, Recognizing the Depressed Patient, “occur most often after age thirty, have a peak incidence between age 40 and 60, and taper off sharply thereafter.”8

Although the manic-depressive patients that Kraepelin studied were severely ill, as their minds were also buffeted by psychotic symptoms, their long-term outcomes were pretty good. Sixty percent of Kraepelin’s 450 “depressed-only” patients experienced but a single episode of depression, and only 13 percent had three or more episodes.9 Other investigators in the first half of the twentieth century reported similar outcomes. In 1931, Horatio Pollock, of the New York State Department of Mental Hygiene, in a long-term study of 2,700 depressed patients hospitalized from 1909 to 1920, reported that more than half of those admitted for a first episode had but a single attack, and only 17 percent had three or more episodes.10 Thomas Rennie, who investigated the fate of 142 depressives admitted to Johns Hopkins Hospital from 1913 to 1916, determined that 39 percent had “lasting recoveries” of five years or more.11 A Swedish physician, Gunnar Lundquist, followed 216 patients treated for depression over an eighteen-year period, and he determined that 49 percent never experienced a second attack, and that another 21 percent had only one other episode. In total, 76 percent of the 216 patients became “socially healthy” and resumed their usual work. After a person has recovered from a depressive episode, Lundquist wrote, he “has the same capacity for work and prospects of getting on in life as before the onset of the disease.”12

These good outcomes spilled over into the first years of the antidepressant era. In 1972, Samuel Guze and Eli Robins at Washington University Medical School in St. Louis reviewed the scientific literature and determined that in follow-up studies that lasted ten years, 50 percent of people hospitalized for depression had no recurrence of their illness. Only a small minority of those with unipolar depression—one in ten—became chronically ill, Guze and Robins concluded.13

That was the scientific evidence that led NIMH officials during the 1960s and 1970s to speak optimistically about the long-term course of the illness. “Depression is, on the whole, one of the psychiatric conditions with the best prognosis for eventual recovery with or without treatment. Most depressions are self-limited,” Jonathan Cole wrote in 1964.14 “In the treatment of depression,” explained Nathan Kline that same year, “one always has as an ally the fact that most depressions terminate in spontaneous remissions. This means that in many cases regardless of what one does the patient eventually will begin to get better.”15 George Winokur, a psychiatrist at Washington University, advised the public in 1969 that “assurance can be given to a patient and to his family that subsequent episodes of illness after a first mania or even a first depression will not tend toward a more chronic course.”16

Indeed, as Dean Schuyler, head of the depression section at the NIMH, explained in a 1974 book, spontaneous recovery rates were so high, exceeding 50 percent within a few months, that it was difficult to “judge the efficacy of a drug, a treatment [electroshock] or psychotherapy in depressed patients.” Perhaps a drug or electroshock could shorten the time to recovery, as spontaneous remission often took many months to happen, but it would be difficult for any treatment to improve on the natural long-term course of depression. Most depressive episodes, Schuyler explained, “will run their course and terminate with virtually complete recovery without specific intervention.”17

Short-Term Blues

The history of trials on the short-term efficacy of antidepressants is a fascinating one, for it reveals much about the capacity of a society and a medical profession to cling to a belief in the magical merits of a pill, even though clinical trials produce, for the most part, dispiriting results. The two antidepressants developed in the 1950s, iproniazid and imipramine, gave birth to two broad types of drugs for depression, known as monoamine oxidase inhibitors (MAOIs) and tricyclics, and studies in the late 1950s and early 1960s found both kinds to be wonderfully effective. However, the studies were of dubious quality, and in 1965, the British Medical Research Council put both types through a more rigorous test. While the tricyclic (imipramine) was modestly superior to placebo, the MAOI (phenelzine) was not. Treatment with this drug was “singularly unsuccessful.”18

Four years later, the NIMH conducted a review of all antidepressant studies, and it found that the “more stringently controlled the study, the lower the improvement rate reported for a drug.” In well-controlled studies, 61 percent of the drug-treated patients improved versus 46 percent of the placebo patients, a net benefit of only 15 percentage points. “The differences between the effectiveness of antidepressant drugs and placebo are not impressive,” it said.19 The NIMH then conducted its own trial of imipramine, and it was only in psychotically depressed patients that this tricyclic showed any significant benefit over a placebo. Only 40 percent of the drug-treated patients completed the seven-week study, and the reason so many dropped out was that their condition “deteriorated.” For many depressed patients, the NIMH concluded in 1970, “drugs play a minor role in influencing the clinical course of their illness.”20

The minimal efficacy of imipramine and other antidepressants led some investigators to wonder whether the placebo response was the mechanism that was helping people feel better. What the drugs did, several speculated, was amplify the placebo response, and they did so because they produced physical side effects, which helped convince patients that they were getting a “magic pill” for depression. To test this hypothesis, investigators conducted at least seven studies in which they compared a tricyclic to an “active” placebo, rather than an inert one. (An active placebo is a chemical that produces an unpleasant side effect of some kind, like dry mouth.) In six of the seven, there was no difference in outcomes.21

That was the efficacy record racked up by tricyclics in the 1970s: slightly better than an inactive placebo, but no better than an active one. The NIMH revisited the question of imipramine’s efficacy in the 1980s, comparing it to two forms of psychotherapy and to placebo, and found that nothing had changed. At the end of sixteen weeks, “there were no significant differences among treatments, including placebo plus clinical management, for the less severely depressed and functionally impaired patients.” Only the severely depressed patients fared better on imipramine than on a placebo.22

Societal belief in the efficacy of antidepressants was reborn with the arrival of Prozac in 1988. Eli Lilly, it seemed, had come up with a very good pill for the blues. This selective serotonin reuptake inhibitor (SSRI) was said to make people feel “better than well.” Unfortunately, once researchers began poking through the clinical trial data submitted to the FDA for Prozac and the other SSRIs that were subsequently brought to market, the “wonder drug” story fell apart.

The first blow to the SSRIs’ image came from Arif Khan at the Northwest Clinical Research Center in Washington. He reviewed the study data submitted to the FDA for seven SSRIs and concluded that symptoms were reduced 42 percent in patients treated with tricyclics, 41 percent in the SSRI group, and 31 percent in those given a placebo.23 The new drugs, it turned out, were no more effective than the old ones. Next, Erick Turner from Oregon Health and Science University, in a review of FDA data for twelve antidepressants approved between 1987 and 2004, determined that thirty-six of the seventy-four trials had failed to show any statistical benefit for the antidepressants. There were just as many trials that had produced negative or “questionable” results as positive ones.24 Finally, in 2008, Irving Kirsch, a psychologist at the University of Hull in the United Kingdom, found that in the trials of Prozac, Effexor, Serzone, and Paxil, symptoms in the medicated patients dropped 9.6 points on the Hamilton Rating Scale of Depression, versus 7.8 points for the placebo group. This was a difference of only 1.8 points, and the National Institute for Clinical Excellence in Britain had previously determined that a three-point drug-placebo difference was needed on the Hamilton scale to demonstrate a “clinically significant benefit.” It was only in a small subgroup of patients—those most severely depressed—that the drugs had been shown to be of real use. “Given these data, there seems little evidence to support the prescription of antidepressant medication to any but the most severely depressed patients, unless alternative treatments have failed to provide benefit,” Kirsch and his collaborators concluded.25

All of this provoked some soul-searching by psychiatrists in their journals. Randomized clinical trials, admitted a 2009 editorial in the British Journal of Psychiatry, had generated “limited valid evidence” for use of the drugs.26 A group of European psychiatrists affiliated with the World Health Organization conducted their own review of Paxil’s clinical data and concluded that “among adults with moderate to severe major depression,” this popular SSRI “was not superior to placebo in terms of overall treatment effectiveness and acceptability.”27 Belief in these medications’ effectiveness, wrote Greek psychiatrist John Ioannidis, who has an appointment at Tufts University School of Medicine in Massachusetts, was a “living myth.” A review of the SSRI clinical data had led to a depressing end for psychiatry, and, as Ioannidis quipped, he and his colleagues couldn’t even now turn to Prozac and the other SSRIs for relief from this dispiriting news because, alas, “they probably won’t work.”28

There is one other interesting addendum to this research history. In the late 1980s, many Germans who were depressed turned to Hypericum perforatum, the plant known as Saint-John’s-wort, for relief. German investigators began conducting double-blind trials of this herbal remedy, and in 1996, the British Medical Journal summarized the evidence: In thirteen placebo-controlled trials, 55 percent of the patients treated with Saint-John’s-wort significantly improved, compared with 22 percent of those given a placebo. The herbal remedy also bested antidepressants in head-to-head competition: In those trials, 66 percent given the herb improved compared to 55 percent of the drug-treated patients. In Germany, Saint-John’s-wort was effective. But would it work similar magic in Americans? In 2001, psychiatrists at eleven medical centers in the United States reported that it wasn’t effective at all. Only 15 percent of the depressed outpatients treated with the herb improved in their eight-week trial. Yet—and this was the curious part—only 5 percent of the placebo patients got better in this study, far below the usual placebo response. American psychiatrists, it seemed, were not eager to see anyone as having gotten better, lest the herb prove effective. But then the NIH funded a second trial of Saint-John’s-wort that had a design that complicated matters for any researcher who wanted to play favorites. It compared Saint-John’s-wort to both Zoloft and a placebo. Since the herb causes side effects, such as dry mouth, it would act at the very least as an active placebo. As such, this truly was a blinded trial, the psychiatrists unable to rely on side effects as a clue to which patients were getting what, and here were the results: Twenty-four percent of the patients treated with Saint-John’s-wort had a “full response,” 25 percent of the Zoloft patients, and 32 percent of the placebo group. “This study fails to support the efficacy of H perforatum in moderately severe depression,” the investigators concluded, glossing over the fact that their drug had failed this test too.29

The Chronicity Factor, Yet Again

The antidepressants’ relative lack of short-term efficacy was not, by itself, a reason to think that the drugs were causing harm. After all, most of those treated with antidepressants were seeing their symptoms abate. Medicated patients in the short-term trials were getting better. The problem was that they were not improving significantly more than those treated with a placebo. However, during the 1960s, several European psychiatrists reported that the long-term course of depression in their drug-treated patients seemed to be worsening.

Exposure to antidepressants, wrote German physician H. P. Hoheisel in 1966, appeared to be “shortening the intervals” between depressive episodes in his patients. These drugs, wrote a Yugoslavian doctor four years later, were causing a “chronification” of the disease. The tricyclics, agreed Bulgarian psychiatrist Nikola Schipkowensky in 1970, were inducing a “change to a more chronic course.” The problem, it seemed, was that many people treated with antidepressants were only “partially cured.”30 Their symptoms didn’t entirely remit, and then, when they stopped taking the antidepressant, their depression regularly got much worse again.

With this concern having surfaced in a few European journals, a Dutch physician, J. D. Van Scheyen, examined the case histories of ninety-four depressed patients. Some had taken an antidepressant and some had not, and when Van Scheyen looked at how the two groups had fared over a five-year period, the difference was startling: “It was evident, particularly in the female patients, that more systematic long-term antidepressant medication, with or without ECT [electroconvulsive therapy], exerts a paradoxical effect on the recurrent nature of the vital depression. In other words, this therapeutic approach was associated with an increase in recurrent rate and a decrease in cycle duration…. Should [this increase] be regarded as an untoward long-term side effect of treatment with tricyclic antidepressants?”31

Over the next twenty years, investigators reported again and again that people treated with an antidepressant were very likely to relapse once they stopped taking the drug. In 1973, investigators in Britain wrote that 50 percent of drug-withdrawn patients relapsed within six months;32 a few years later, investigators at the University of Pennsylvania announced that 69 percent of patients withdrawn from antidepressants relapsed within this time period. There was, they confessed, “rapid clinical deterioration in most of the patients.”33 In 1984, Robert Prien at the NIMH reported that 71 percent of depressed patients relapsed within eighteen months of drug withdrawal.34 Finally, in 1990, the NIMH added to this gloomy picture when it reported the long-term results from its study that had compared imipramine to two forms of psychotherapy and to a placebo. At the end of eighteen months, the stay-well rate was best for the cognitive therapy group (30 percent) and lowest for the imipramine-exposed group (19 percent).35

Everywhere, the message was the same: Depressed people who were treated with an antidepressant and then stopped taking it regularly got sick again. In 1997, Ross Baldessarini from Harvard Medical School, in a meta-analysis of the literature, quantified the relapse risk: Fifty percent of drug-withdrawn patients relapsed within fourteen months.36 Baldessarini also found that the longer a person was on an antidepressant, the greater the relapse rate following drug withdrawal. It was as though a person treated with the drug gradually became less and less able, in a physiological sense, to do without it. Investigators in Britain came to the same sobering realization: “After stopping an antidepressant, symptoms tend to build up gradually and become chronic.”37

Do All Psychotropics Work This Way?

Although a handful of European physicians may have sounded the alarm about the changing course of depression in the late 1960s and early 1970s, it wasn’t until 1994 that an Italian psychiatrist, Giovanni Fava, from the University of Bologna, pointedly announced that it was time for psychiatry to confront this issue. Neuroleptics had been found to be quite problematic over the long term, the benzodiazepines had, too, and now it looked like the antidepressants were producing a similar long-term record. In a 1994 editorial in Psychotherapy and Psychosomatics, Fava wrote:

Within the field of psychopharmacology, practitioners have been cautious, if not fearful, of opening a debate on whether the treatment is more damaging [than helpful]…. I wonder if the time has come for debating and initiating research into the likelihood that psychotropic drugs actually worsen, at least in some cases, the progression of the illness which they are supposed to treat.38

In this editorial and several more articles that followed, Fava offered a biological explanation for what was going on with the antidepressants. Like antipsychotics and benzodiazepines, these drugs perturb neurotransmitter systems in the brain. This leads to compensatory “processes that oppose the initial acute effects of a drug…. When drug treatment ends, these processes may operate unopposed, resulting in appearance of withdrawal symptoms and increased vulnerability to relapse,” he wrote.39 Moreover, Fava noted, pointing to Baldessarini’s findings, it was evident that the longer one stayed on antidepressants, the worse the problem. “Whether one treats a depressed patient for three months, or three years, it does not matter when one stops the drugs. A statistical trend suggested that the longer the drug treatment, the higher the likelihood of relapse.”40

But, Fava also wondered, what was the outcome for people who stayed on antidepressants indefinitely? Weren’t they also relapsing with great frequency? Perhaps the drugs cause “irreversible receptor modifications,” Fava said, and, as such, “sensitize” the brain to depression. This could explain the “bleak long term outcome of depression.” He summed up the problem in this way:

Antidepressant drugs in depression might be beneficial in the short term, but worsen the progression of the disease in the long term, by increasing the biochemical vulnerability to depression…. Use of antidepressant drugs may propel the illness to a more malignant and treatment unresponsive course.41

This possibility was now front and center in psychiatry. “His question and the several related matters … are not pleasant to contemplate and may seem paradoxical, but they now require open-minded and serious clinical and research consideration,” Baldessarini said.42 Three physicians from the University of Louisville School of Medicine echoed the sentiment. “Long-term antidepressant use may be depressogenic,” they wrote, in a 1998 letter to the Journal of Clinical Psychiatry. “It is possible that antidepressant agents modify the hardwiring of neuronal synapses [which] not only render antidepressants ineffective but also induce a resident, refractory depressive state.”43

It’s the Disease, Not the Drug

Once again, psychiatry had reached a moment of crisis. The specter of supersensitivity psychosis had stirred up a hornets’ nest in the early 1980s, and now, in the mid-1990s, a concern very similar in kind had appeared. This time, the stakes were perhaps even higher. Fava was raising this issue even as U.S. sales of SSRIs were soaring. Prominent psychiatrists at the best medical schools in the United States had told newspaper and magazine reporters of their wonders. These drugs were now being prescribed to an ever-larger group of people, including to more than a million American children. Could the field now confess that these medications might be making people chronically depressed? That they led to a “malignant” long-term course? That they caused biological changes in the brain that “sensitized” a person to depression? And if that were so, how could they possibly be prescribed to young children and teenagers? Why would doctors do that to children? This concern of Fava’s needed to be hushed up, and hushed up fast. Early in 1994, after Fava first broached the subject, Donald Klein from Columbia University told Psychiatric News that this subject was not going to be investigated.

“The industry is not interested [in this question], the NIMH is not interested, and the FDA is not interested,” he said. “Nobody is interested.”44

Indeed, by this time, leaders of American psychiatry were already coming up with an alternative explanation for the “bleak” long-term outcomes, one that spared their drugs any blame. The old epidemiological studies from the pre-antidepressant era, which had shown that people regularly recovered from a severe depressive episode and that a majority then stayed well, were “flawed.” A panel of experts convened by the NIMH put it this way: “Improved approaches to the description and classification of [mood] disorders and new epidemiologic studies [have] demonstrated the recurrent and chronic nature of these illnesses, and the extent to which they represent a continual source of distress and dysfunction for affected individuals.”45 Depression was at last being understood; that was the story psychiatry embraced, and textbooks were rewritten to tell of this advance in knowledge. Not long ago, noted the 1999 edition of the American Psychiatric Association’s Textbook of Psychiatry, it was believed that “most patients would eventually recover from a major depressive episode. However, more extensive studies have disproved this assumption.”46 It was now known, the APA said, that “depression is a highly recurrent and pernicious disorder.”

Depression, it seemed, had never been the relatively benign illness described by Silverman and others at the NIMH in the late 1960s and early 1970s. And with depression reconceived in this way, as a chronic illness, psychiatry now had a rationale for long-term use of antidepressants. The problem wasn’t that exposure to an antidepressant caused a biological change that made people more vulnerable to depression; the problem was that once the drug was withdrawn, the disease returned. Moreover, psychiatry did have studies proving the merits of keeping people on antidepressants. After all, relapse rates were higher for patients withdrawn from the medications than for those maintained on the drugs. “Antidepressants reduce the risk of relapse in depressive disorder, and continued treatment with antidepressants would benefit many patients with recurrent depressive disorder,” explained a group of psychiatrists who reviewed this literature.47

During the 1990s, psychiatrists in the United States and elsewhere fleshed out the spectrum of outcomes achieved with this new paradigm of care, which emphasized “maintaining” people on the medications. One-third of all unipolar patients, researchers concluded, are “non-responders” to antidepressants. Their symptoms do not abate over the short term, and this group is said to have a poor long-term outcome. Another third of unipolar patients are “partial responders” to antidepressants, and in short-term trials, they show up as being helped by the drugs. The problem, NIMH investigators discovered, in a long-term study called the Collaborative Program on the Psychobiology of Depression, was that these drug-maintained patients fared poorly over the long term. “Resolution of major depressive episode with residual subthreshold depressive symptoms, even the first lifetime episode, appears to be the first step of a more severe, relapsing, and chronic future course,” explained Lewis Judd, a former director of the NIMH, in a 2000 report.48 The final third of patients see their symptoms remit over the short term, but only about half of this group, when maintained on an antidepressant, stay well for long periods of time.49

In short, two-thirds of patients initially treated with an antidepressant can expect to have recurrent bouts of depression, and only a small percentage of people can be expected to recover and stay well. “Only 15% of people with unipolar depression experience a single bout of the illness,” the APA’s 1999 textbook noted, and for the remaining 85 percent, with each new episode, remissions become “less complete and new recurrences develop with less provocation.”50 This outcomes data definitely told of a pernicious disorder, but then John Rush, a prominent psychiatrist at the University of Texas Southwestern Medical Center in Dallas, suggested that “real-world outcomes” were even worse. Those outcome statistics arose from clinical trials that had cherry-picked patients most likely to respond well to an antidepressant, he said. “Longer-term clinical outcomes of representative outpatients with nonpsychotic major depressive disorder treated in daily practice in either the private or public sectors are yet to be well defined.”51

In 2004, Rush and his colleagues filled in this gap in the medical literature. They treated 118 “real world” patients with antidepressants and provided them with a wealth of emotional and clinical support “specifically designed to maximize clinical outcomes.” This was the best care that modern psychiatry could provide, and here were their real-world results: Only 26 percent of the patients even responded to the antidepressant (meaning that their symptoms decreased at least 50 percent on a rating scale), and only about half of those who responded stayed better for any length of time. Most startling of all, only 6 percent of the patients saw their depression fully remit and stay away during the yearlong trial. These “findings reveal remarkably low response and remission rates,” Rush said.52

This dismal picture of real-world outcomes was soon confirmed by a large NIMH study known as the STAR*D trial, which Rush helped direct. Most of the 4,041 real-world outpatients enrolled in the trial were only moderately ill, and yet fewer than 20 percent remitted and stayed well for a year. “Most individuals with major depressive disorders have a chronic course, often with considerable symptomatology and disability even between episodes,” the investigators concluded.53

In the short span of forty years, depression had been utterly transformed. Prior to the arrival of the drugs, it had been a fairly rare disorder, and outcomes generally were good. Patients and their families could be reassured that it was unlikely that the emotional problem would turn chronic. It just took time—six to twelve months or so—for the patient to recover. Today, the NIMH informs the public that depressive disorders afflict one in ten Americans every year, that depression is “appearing earlier in life” than it did in the past, and that the long-term outlook for those it strikes is glum. “An episode of major depression may occur only once in a person’s lifetime, but more often, it recurs throughout a person’s life,” the NIMH warns.54

Unmedicated v. Medicated Depression

We’ve now arrived at an intellectual place similar to what we experienced with the antipsychotics: Can it really be that antidepressants, which are so popular with the public, worsen long-term outcomes? All of the data we’ve reviewed so far indicates that the drugs do just that, but there is one piece of evidence that we are still missing: What does unmedicated depression look like today? Does it run a better long-term course? Unfortunately, as researchers from the University of Ottawa discovered in 2008, there aren’t good-quality randomized trials comparing long-term outcomes in antidepressant-treated and never-medicated patients. As such, they concluded, randomized trials “provide no guidance for longer treatment.”55 However, we can search for “naturalistic” studies that might help us answer this question.*

Researchers in the UK, the Netherlands, and Canada investigated this question by looking back at case histories of depressed patients whose medication use had been tracked. In a 1997 study of outcomes at a large inner-city facility, British scientists reported that ninety-five never-medicated patients saw their symptoms decrease by 62 percent in six months, whereas the fifty-three drug-treated patients experienced only a 33 percent reduction in symptoms. The medicated patients, they concluded, “continued to have depressive symptoms throughout the six months.”56 Dutch investigators, in a retrospective study of the ten-year outcomes of 222 people who had suffered a first episode of depression, found that 76 percent of those not treated with an antidepressant recovered and never relapsed, compared to 50 percent of those prescribed an antidepressant.57 Finally, Scott Patten, from the University of Calgary, plumbed a large Canadian health database to assess the five-year outcomes of 9,508 depressed patients, and he determined that the medicated patients were depressed on average nineteen weeks each year, versus eleven weeks for those not taking the drugs. These findings, Patten wrote, were consistent with Giovanni Fava’s hypothesis that “antidepressant treatment may lead to a deterioration in the long-term course of mood disorders.”58

A study conducted by the World Health Organization in fifteen cities around the world to assess the value of screening for depression led to similar results. The researchers looked for depression in patients who showed up at health clinics for other complaints, and then, in a fly-on-the-wall manner, followed those they had identified as depressed for the next twelve months. They reasoned that the general practitioners in the clinics would detect depression in some of the patients but not all, and hypothesized that outcomes would fall into four groups: those diagnosed and treated with antidepressants would fare the best, those diagnosed and treated with benzodiazepines would fare the second best, those diagnosed and treated without psychotropics the third best, and those undetected and untreated the worst. Alas, the results were the opposite. Altogether, the WHO investigators identified 740 people as depressed, and it was the 484 who weren’t exposed to psychotropic medications (whether diagnosed or not) that had the best outcomes. They enjoyed much better “general health” at the end of one year, their depressive symptoms were much milder, and a lower percentage were judged to still be “mentally ill.” The group that suffered most from “continued depression” were the patients treated with an antidepressant. The “study does not support the view that failure to recognize depression has serious adverse consequences,” the investigators wrote.59

Next, researchers in Canada and the United States studied whether antidepressant use affected disability rates. In Canada, Carolyn Dewa and her colleagues at the Centre for Addiction and Mental Health in Ontario identified 1,281 people who went on short-term disability between 1996 and 1998 because they missed ten consecutive days of work due to depression. The 564 people who subsequently didn’t fill a prescription for an antidepressant returned to work, on average, in 77 days, while the medicated group took 105 days to get back on the job. More important, only 9 percent of the unmedicated group went on to long-term disability, compared to 19 percent of those who took an antidepressant.* “Does the lack of antidepressant use reflect a resistance to adopting a sick role and consequently a more rapid return to work?” Dewa wondered.60 In a similar vein, University of Iowa psychiatrist William Coryell and his NIMH-funded colleagues studied the six-year “naturalistic” outcomes of 547 people who suffered a bout of depression, and they found those who were treated for the illness were three times more likely than the untreated group to suffer a “cessation” of their “principal social role” and nearly seven times more likely to become “incapacitated.” Moreover, while many of the treated patients saw their economic status markedly decline during the six years, only 17 percent of the unmedicated group saw their incomes drop, and 59 percent saw their incomes rise. “The untreated individuals described here had milder and shorter-lived illnesses [than those who were treated], and, despite the absence of treatment, did not show significant changes in socioeconomic status in the long term,” Coryell wrote.61

One-Year Outcomes in WHO Screening Study for Depression

The WHO investigators reported that a higher percentage of the unmedicated group recovered, and that “continuing depression” was highest in those treated with an antidepressant. Source: Goldberg, D. “The effects of detection and treatment of major depression in primary care.” British Journal of General Practice 48 (1998): 1840–44.

The Risk of Disability for Depressed Patients

This was a study of 1,281 employees in Canada who went on short-term disability due to depression. Those who took an antidepressant were more than twice as likely to go on to long-term disability. Source: Dewa, C. “Pattern of antidepressant use and duration of depression-related absence from work.” British Journal of Psychiatry 183 (2003): 507–13.

Several countries also observed that following the arrival of the SSRIs, the number of their citizens disabled by depression dramatically increased. In Britain, the “number of days of incapacity” due to depression and neurotic disorders jumped from 38 million in 1984 to 117 million in 1999, a threefold increase.62 Iceland reported that the percentage of its population disabled by depression nearly doubled from 1976 to 2000. If antidepressants were truly helpful, the Iceland investigators reasoned, then the use of these drugs “might have been expected to have a public health impact by reducing disability, morbidity, and mortality due to depressive disorders.”63 In the United States, the percentage of working-age Americans who said in health surveys that they were disabled by depression tripled during the 1990s.64

NIMH’s Study of Untreated Depression

In this study, the NIMH investigated the naturalistic outcomes of people diagnosed with major depression who got treatment and those who did not. At the end of six years, the treated patients were much more likely to have stopped functioning in their usual societal roles and to have become incapacitated. Source: Coryell, W. “Characteristics and significance of untreated major depressive disorder.” American Journal of Psychiatry 152 (1995): 1124–29.

There is one final study we need to review. In 2006, Michael Posternak, a psychiatrist at Brown University, confessed that “unfortunately, we have little direct knowledge regarding the untreated course of major depression.” The poor long-term outcomes detailed in APA textbooks and the NIMH studies told the story of medicated depression, which might be a very different beast. To study what untreated depression might be like in modern times, Posternak and his collaborators identified eighty-four patients enrolled in the NIMH’s Psychobiology of Depression program who, after recovering from an initial bout of depression, subsequently relapsed but did not then go back on medication. Although these patients were not a “never-exposed” group, Posternak could still track their “untreated” recovery from this second episode of depression. Here were the results: Twenty-three percent recovered in one month, 67 percent in six months, and 85 percent within a year. Kraepelin, Posternak noted, had said that untreated depressive episodes usually cleared up within six to eight months, and these results provided “perhaps the most methodologically rigorous confirmation of this estimate.”65

The old epidemiological studies were apparently not so flawed after all. This study also showed why six-week trials of the drugs had led psychiatry astray. Although only 23 percent of the unmedicated patients were recovered after one month, spontaneous remissions continued after that at the rate of about 2 percent per week, and thus at the end of six months, two-thirds were depression free. It takes time for unmedicated depression to lift, and that is missed in short-term trials. “If as many as 85% of depressed individuals who go without somatic treatment spontaneously recover within one year, it would be extremely difficult for any intervention to demonstrate a superior result to this,” Posternak said.66

It was just as Joseph Zubin had warned in 1955: “It would be foolhardy to claim a definite advantage for a specified therapy without a two- to five-year follow-up.”67

Nine Million and Counting

We can now see how the antidepressant story all fits together, and why the widespread use of these drugs would contribute to a rise in the number of disabled mentally ill in the United States. Over the short term, those who take an antidepressant will likely see their symptoms lessen. They will see this as proof that the drugs work, as will their doctors. However, this short-term amelioration of symptoms is not markedly greater than what is seen in patients treated with a placebo, and this initial use also puts them onto a problematic long-term course. If they stop taking the medication, they are at high risk of relapsing. But if they stay on the drugs, they will also likely suffer recurrent episodes of depression, and this chronicity increases the risk that they will become disabled. The SSRIs, to a certain extent, act like a trap in the same way that neuroleptics do.

We can also track the rise in the number of people disabled by depression during the antidepressant era. In 1955, there were 38,200 people in the nation’s mental hospitals due to depression, a per-capita disability rate of 1 in 4,345. Today, major depressive disorder is the leading cause of disability in the United States for people ages fifteen to forty-four. According to the NIMH, it affects 15 million American adults, and researchers at Johns Hopkins School of Public Health reported in 2008 that 58 percent of this group is “severely impaired.”68 That means nearly nine million adults are now disabled, to some extent, by this condition.

It’s also important to note that this disability doesn’t arise solely from the fact that people treated with antidepressants are at high risk of suffering recurrent episodes of depression. SSRIs also cause a multitude of troubling side effects. These include sexual dysfunction, suppression of REM sleep, muscle tics, fatigue, emotional blunting, and apathy. In addition, investigators have reported that long-term use is associated with memory impairment, problem-solving difficulties, loss of creativity, and learning deficiencies. “Our field,” confessed Maurizio Fava and others at Massachusetts General Hospital in 2006, “has not paid sufficient attention to the presence of cognitive symptoms emerging or persisting during long-term antidepressant treatment…. These symptoms appear to be quite common.”69

Animal studies have also produced alarming results. Rats fed high doses of SSRIs for four days ended up with neurons that were swollen and twisted like corkscrews. “We don’t know if the cells are dying,” the researchers from Jefferson Medical College in Philadelphia wrote. “These effects may be transient and reversible. Or they may be permanent.”70 Other reports have suggested that the drugs may reduce the density of synaptic connections in the brain, cause cell death in the hippocampus, shrink the thalamus, and trigger abnormalities in frontal-lobe function. None of these possibilities has been well studied or documented, but something is clearly going amiss if symptoms of cognitive impairment in long-term users of antidepressants are “quite common.”

Melissa

I interviewed a number of people who receive SSI or SSDI due to depression, and many told stories similar to Melissa Sances’s. They first took an antidepressant when they were in their teens or early twenties, and the drug worked for a time. But then their depression returned, and they have struggled with depressive episodes ever since. Their stories fit to a remarkable degree with the long-term chronicity detailed in the scientific literature. I also caught up with Melissa a second time, nine months after our first interview, and her struggles remained much the same. In the fall of 2008, she started taking a high dose of a monoamine oxidase inhibitor, which provided a few weeks of relief, and then her depression returned with a vengeance. She was now considering electroshock therapy, and as we ate lunch at a Thai restaurant, she spoke, in a wistful manner, of how she wished her treatment could have been different.

“I do wonder what might have happened if [at age sixteen] I could have just talked to someone, and they could have helped me learn about what I could do on my own to be a healthy person. I never had a role model for that. They could have helped me with my eating problems, and my diet and exercise, and helped me learn how to take care of myself. Instead, it was you have this problem with your neurotransmitters, and so here, take this pill Zoloft, and when that didn’t work, it was take this pill Prozac, and when that didn’t work, it was take this pill Effexor, and then when I started having trouble sleeping, it was take this sleeping pill,” she says, her voice sounding more wistful than ever. “I am so tired of the pills.”

* The caveat with the naturalistic studies is that the unmedicated cohort, at the moment of initial diagnosis, may not be as depressed as those who go on drugs. Furthermore, those who eschew drugs may also have a greater “inner resilience.” Even given these caveats, we should be able to gain a sense of the course of unmedicated depression from the naturalistic studies, and see how it compares to the course of depression treated with antidepressants.

* This study powerfully illustrates why we, as a society, may be deluded about the merits of antidepressants. Seventy-three percent of those who took an antidepressant returned to work (another 8 percent quit or retired), and undoubtedly many in that group would tell of how the drug treatment helped them. They would become societal voices attesting to the benefits of this paradigm of care, and without a study of this kind, there would be no way to know that the medications were, in fact, increasing the risk of long-term disability.
