
Tuesday, June 18, 2013

Ethan Watters - The Problem With Psychiatry, the ‘DSM,’ and the Way We Study Mental Illness

From Pacific Standard magazine, Ethan Watters takes a look at the DSM and the history of diagnosis in psychiatry. Watters explains how the diagnostic manual determines which mental illnesses count as "legitimate," which have legal standing, and which get reimbursed by insurance, and how all of this carries the subtext of defining who we are as a people and a culture.

The Problem With Psychiatry, the ‘DSM,’ and the Way We Study Mental Illness


Psychiatry is under attack for not being scientific enough, but the real problem is its blindness to culture. When it comes to mental illness, we wear the disorders that come off the rack.


June 3, 2013 • By Ethan Watters

In the 1880s, women by the tens of thousands displayed the distinctive signs of hysteria: convulsive fits, facial tics, spinal irritation, sensitivity to touch, leg paralysis. 
(ILLUSTRATION: MICHELLE THOMPSON)

Imagine for a moment that the American Psychiatric Association was about to compile a new edition of its Diagnostic and Statistical Manual of Mental Disorders. But instead of 2013, imagine, just for fun, that the year is 1880.

Transported to the world of the late 19th century, the psychiatric body would have virtually no choice but to include hysteria in the pages of its new volume. Women by the tens of thousands, after all, displayed the distinctive signs: convulsive fits, facial tics, spinal irritation, sensitivity to touch, and leg paralysis. Not a doctor in the Western world at the time would have failed to recognize the presentation. “The illness of our age is hysteria,” a French journalist wrote. “Everywhere one rubs elbows with it.”

Hysteria would have had to be included in our hypothetical 1880 DSM for the exact same reasons that attention deficit hyperactivity disorder is included in the just-released DSM-5. The disorder clearly existed in a population and could be reliably distinguished, by experts and clinicians, from other constellations of symptoms. There were no reliable medical tests to distinguish hysteria from other illnesses then; the same is true of the disorders listed in the DSM-5 today. Practically speaking, the criteria by which something is declared a mental illness are virtually the same now as they were over a hundred years ago.

The DSM determines which mental disorders are worthy of insurance reimbursement, legal standing, and serious discussion in American life. That its diagnoses are not more scientific is, according to several prominent critics, a scandal. In a major blow to the APA’s dominance over mental-health diagnoses, Thomas R. Insel, director of the National Institute of Mental Health, recently declared that his organization would no longer rely on the DSM as a guide to funding research. “The weakness is its lack of validity,” he wrote. “Unlike our definitions of ischemic heart disease, lymphoma, or AIDS, the DSM diagnoses are based on a consensus about clusters of clinical symptoms, not any objective laboratory measure. In the rest of medicine, this would be equivalent to creating diagnostic systems based on the nature of chest pain or the quality of fever.” As an alternative, Insel called for the creation of a new, rival classification system based on genetics, brain imaging, and cognitive science.

This idea—that we might be able to strip away all subjectivity from the diagnosis of mental illness and render psychiatry truly scientific—is intuitively appealing. But there are a couple of problems with it. The first is that the science simply isn’t there yet. A functional neuroscientific understanding of mental suffering is years, perhaps generations, away from our grasp. What are clinicians and patients to do until then? But the second, more telling problem with Insel’s approach lies in its assumption that it is even possible to strip culture from the study of mental illness. Indeed, from where I sit, the trouble with the DSM—both this one and previous editions—is not so much that it is insufficiently grounded in biology, but that it ignores the inescapable relationship between social cues and the shifting manifestations of mental illness.

PSYCHIATRY TENDS NOT TO learn from its past. With each new generation, psychiatric healers dismiss the enthusiasms of their predecessors by pointing out the unscientific biases and cultural trends on which their theories were based. Looking back at hysteria, we can see now that 19th-century doctors were operating amidst fanciful beliefs about female anatomy, an assumption of feminine weakness, and the Victorian-era weirdness surrounding female sexuality. And good riddance to bad old ideas. But the more important point to take away is this: There is little doubt that the symptoms expressed by those thousands of women were real.

The resounding lesson of the history of mental illness is that psychiatric theories and diagnostic categories shape the symptoms of patients. “As doctors’ own ideas about what constitutes ‘real’ disease change from time to time,” writes the medical historian Edward Shorter, “the symptoms that patients present will change as well.”

This is not to say that psychiatry wantonly creates sick people where there are none, as many critics fear the new DSM-5 will do. Allen Frances—a psychiatrist who, as it happens, was in charge of compiling the previous DSM, the DSM-IV—predicts in his new book, Saving Normal, that the DSM-5 will “mislabel normal people, promote diagnostic inflation, and encourage inappropriate medication use.” Big Pharma, he says, is intent on ironing out all psychological diversity to create a “human monoculture,” and the DSM-5 will facilitate that mission. In Frances’ dystopian post-DSM-5 future, there will be a psychoactive pill for every occasion, a diagnosis for every inconvenient feeling: “Disruptive mood dysregulation disorder” will turn temper tantrums into a mental illness and encourage a broadened use of antipsychotic drugs; new language describing attention deficit disorder that expands the diagnostic focus to adults will prompt a dramatic rise in the prescription of stimulants like Adderall and Ritalin; the removal of the bereavement exclusion from the diagnosis of major depressive disorder will stigmatize the human process of grieving. The list goes on.

In 2005, a large study suggested that 46 percent of Americans will receive a mental-health diagnosis at some point in their lifetimes. Critics like Frances suggest that, with the new categories and loosened criteria in the DSM-5, the percentage of Americans thinking of themselves as mentally ill will rise far above that mark.

But recent history doesn’t support these fears. In 1994 the DSM-IV—the edition Frances oversaw—launched several new diagnostic categories that became hugely popular among clinicians and the public (bipolar II, attention deficit hyperactivity disorder, and social phobia, to name a few), but the number of people receiving a mental-health diagnosis did not go up between 1994 and 2005. In fact, as psychologist Gary Greenberg, author of The Book of Woe, recently pointed out to me, the prevalence of mental health diagnoses actually went down slightly. This suggests that the declarations of the APA don’t have the power to create legions of mentally ill people by fiat, but rather that the number of people who struggle with their own minds stays somewhat constant.

What changes, it seems, is that they get categorized differently depending on the cultural landscape of the moment. Those walking worried who would have accepted the ubiquitous label of “anxiety” in the 1970s would accept the label of depression that rose to prominence in the late 1980s and the 1990s, and many in the same group might today think of themselves as having social anxiety disorder or ADHD.

Viewed over history, mental health symptoms begin to look less like immutable biological facts and more like a kind of language. Someone in need of communicating his or her inchoate psychological pain has a limited vocabulary of symptoms to choose from. From a distance, we can see how the flawed certainties of Victorian-era healers created a sense of inevitability around the symptoms of hysteria. There is no reason to believe that the same isn’t happening today. Healers have theories about how the mind functions and then discover the symptoms that conform to those theories. Because patients usually seek help when they are in need of guidance about the workings of their minds, they are uniquely susceptible to being influenced by the psychiatric certainties of the moment. There is really no getting around this dynamic. Even Insel’s supposedly objective laboratory scientists would, no doubt, inadvertently define which symptoms our troubled minds gravitate toward. The human unconscious is adept at speaking the language of distress that will be understood.


WHY DO PSYCHIATRIC DIAGNOSES fade away only to be replaced by something new? The demise of hysteria may hold a clue. In the early part of the 20th century, the distinctive presentation of the disorder began to blur and then disappear. The symptoms began to lose their punch. In France this was called la petite hystérie. One doctor described patients who would “content themselves with a few gesticulatory movements, with a few spasms.” Hysteria had begun to suffer from a kind of diagnostic overload. By the 1930s or so, the dramatic and unmistakable symptoms of hysteria were vanishing from the cultural landscape because they were no longer recognized as a clear communication of psychological suffering by a new generation of women and their healers.

It is true that the DSM has a great deal of influence in modern America, but it may be more of a scapegoat than a villain. It is certainly not the only force at play in determining which symptoms become culturally salient. As Frances suggests, the marketing efforts of Big Pharma on TV and elsewhere have a huge influence over which diagnoses become fashionable. Some commentators have noted that shifts in diagnostic trends seem uncannily timed to coincide with the term lengths of the patents that pharmaceutical companies hold on drugs. Is it a coincidence that the diagnosis of anxiety diminished as the patents on tranquilizers ran out? Or that the diagnosis of depression rose as drug companies landed new exclusive rights to sell various antidepressants? Consider for a moment that the diagnosis of depression didn’t become popular in Japan until GlaxoSmithKline got approval to market Paxil in the country.

Journalists play a role as well: We love to broadcast new mental-health epidemics. The dramatic rise of bulimia in the United Kingdom neatly coincided with the media frenzy surrounding the rumors and subsequent revelation that Princess Di suffered from the condition. Similarly, an American form of anorexia hit Hong Kong in the mid-1990s just after a wave of local media coverage brought attention to the disorder.

The trick is not to scrub culture from the study of mental illness but to understand how the unconscious takes cues from its social settings. This knowledge won’t make mental illnesses vanish (Americans, for some reason, find it particularly difficult to grasp that mental illnesses are absolutely real and culturally shaped at the same time). But it might discourage healers from leaping from one trendy diagnosis to the next. As things stand, we have little defense against such enthusiasms. “We are always just one blockbuster movie and some weekend therapist’s workshops away from a new fad,” Frances writes. “Look for another epidemic beginning in a decade or two as a new generation of therapists forgets the lessons of the past.” Given all the players stirring these cultural currents, I’d make a sizable bet that we won’t have to wait nearly that long.
