Were we all asleep at the switch? A personal reminiscence of psychiatry from 1940 to 2010
This is the last paper that my stepfather, Leon Eisenberg, worked on before his death on September 15, 2009. Revisions were well on their way. I have tried to complete the paper for him. When the manuscript is in the first person singular it is Leon Eisenberg ‘speaking.’ All felicitous phrases and insights are his; any problems with the paper undoubtedly rest with me. Laurence B. Guttmacher, MD.
While academic psychiatrists sought evidence in clinic and laboratory for health-related decisions, the ‘monetarization of medicine’ (1) overruled science and made large de facto decisions for the profession.
There have been enormous changes in psychiatry during the nearly 70 years since I entered medical school in 1940:
- i) The modalities of treatment;
- ii) The venues in which treatment is provided;
- iii) The numbers and kinds of health personnel who provide care;
- iv) The organization and financing of medical care; and
- v) The theories and practices of psychiatric diagnosis and treatment.
Just how extraordinary those changes have been was not fully apparent to me until I sat down to prepare this memoir. While we were occupied with debates about issues internal to our field, changes in the organization and financing of care proved to be decisive in its evolution.
In 1956 the United States had 550 000 mental hospital beds (2). The prediction was that the number of hospitalized mental patients would climb to 700 000 by the year 2000, yet the reverse occurred. By 1998 the supply of inpatient beds as measured by beds per capita was a quarter of that in 1956. The predominant venue of care shifted to the outpatient department, to chronic disease hospitals and nursing homes for geriatric patients, and to jails. When I entered the field, the university psychiatric departments in the leading medical schools were predominantly psychoanalytic in orientation, and most of the residents undertook a didactic psychoanalysis in the course of their training.
By the 1960s, treatment had been medicalized. The first psychotropic drugs were discovered by serendipity and introduced into psychiatry. The symptom relief they brought was so startling and persuasive that there was a major shift from psychologic to pharmacological treatment.
The financing of medical education during World War II was through the federal government. Male medical students (and 95% were male) were automatically drafted into the Armed Forces, provided with tuition, books, meals, and dormitory facilities, and marched off to the cafeteria and classrooms! We had no financial indebtedness! With the end of the War, the doctor draft ended and tuition became a personal responsibility. By 2009, mean student indebtedness had reached $156 456! (3)
As World War II began, it became clear that the number of psychiatrists in the United States (about 2500 in 1940) was far short of the total needed, and rapid, short training programs were introduced to gear up internists to function as neuropsychiatrists. Some of them found the field attractive enough to undertake formal psychiatric training after the War. The large number of returning veterans with psychiatric disorders spurred the Veterans Administration (VA) to finance expanded psychiatric residency training slots. The result was that the number of psychiatrists rose from 2500 in 1940 to about 6500 in 1960. Outpatient psychiatrists, often psychoanalytically oriented, tended to cluster in large cities with analytic institutes. In 1954, 20% of practicing psychiatrists in the US were located in Manhattan (4).
As recently as 1977, 64% of psychiatric visits were exclusively for psychotherapy with no prescription provided; in 2002 this was true for <10% of visits to psychiatrists. A survey of office-based psychiatrists looking only at visits over half an hour found that only 19.1% of psychiatrists provided psychotherapy to all of their patients in 1997; this further declined to 10.8% by 2005. Inclusion only of outpatient visits more than 30 min undoubtedly inflated the prevalence of psychotherapy (5). Training in psychotherapy, once abundant, now is neglected.
Changing conceptualizations of mental disorders and their treatment
As the 19th century ended, psychiatric patients and the doctors who cared for them remained isolated in remote asylums, stigmatized by the fear and shame the patients (and their diseases) aroused. At their Annual Meeting in 1894, American asylum psychiatrists were castigated by S. Weir Mitchell, Professor of Neurology at the University of Pennsylvania, in these terms:
Want of competent original work is to my mind the worst symptom of torpor the asylums now present…Where…are your careful scientific reports?…You live alone, uncriticized, unquestioned, out of the healthy conflicts and honest rivalries which keep us [neurologists] up to the mark of the fullest possible competence… (6)
Whether or not Mitchell’s rebuke of American psychiatrists was warranted, his criticism of the field certainly was. Only at the turn of the 20th century were the foundations for a research enterprise in psychiatry established; but for a significant period they remained widely spaced oases in an academic desert. As late as 1958, most US medical schools had at best a part-time psychiatric faculty, constantly hectored as barely discriminable from its clientele and heavily dependent on private practice (7). Yet, by the end of the 20th century, every US medical school had an academic department of psychiatry. How did that come about?
Psychoanalysis and mental illness
Whether or not psychoanalysis is a science and just how effective it is as a therapy, it has, nonetheless, had a powerful impact on our field. It provided plausible explanations for the bizarre symptoms patients exhibited. It taught trainees to listen to patients and to try to understand their distress, not simply to classify their diseases or sedate them or lock them away. It highlighted the importance of memory, its vulnerability to distortion, and its centrality to patients’ life narratives, the stories we tell ourselves and others. It made clear how those narratives can be self-defeating and defined the task of therapy as helping patients to reconstruct their autobiographies to permit growth. Psychoanalysis helped psychiatry preserve an abiding interest in the individuality of patients while other medical specialists were losing sight of the patient in their preoccupation with the biology of the disease. It connected the symptoms of mental illness to the psychopathology of everyday life. Psychiatrists learned to help patients by paying attention to their mental symptoms in an era when psychiatry had no procedures. Although Freud saw no role for psychoanalysis in the treatment of the psychoses, his method gave birth to outpatient psychiatric practice. Diagnosis and classification—the hallmarks of the medical approach—became increasingly irrelevant to clinical practice because analytically oriented psychotherapy dealt with individual and family dynamics, rather than with syndromes or diseases.
The influence of psychoanalysis grew apace with the European intellectual migration after the Nazi putsch in Germany. When it was banned from the Congress of Psychology at Munich as ‘a Jewish science’ in October 1933, psychoanalysts in Berlin and Vienna began to migrate to the UK and the US. Jahoda has estimated that some 100–200 European analysts and some 30–50 analytically oriented psychologists emigrated to America in the 1930s (8). That number is small, but the membership of the American Psychoanalytic Association was only 135 in 1936 and almost doubled to 249 by 1944 (B. Canty, personal communication). The European influx was as significant intellectually as it was numerically; many of the refugees enriched post-Freudian psychoanalytic theory and became leaders in the movement. Psychoanalysis became the dominant trend in academic psychiatry in the US. By the early 1960s, although only 10% of American psychiatrists were analysts, more than half of the chairs of medical school departments held membership in psychoanalytic societies. America became ‘the world center for psychoanalysis’ (8). In contrast, Professor Aubrey Lewis of the Maudsley noted that ‘none of the recognized teachers of psychiatry in the undergraduate medical schools of London is a member of the Psychoanalytical Society’ (9).
How did psychoanalysis come to be so dominant? There was no other psychologic theory that provided what was purported to be so comprehensive an account of the origins of psychopathology. The brain sciences were largely irrelevant to clinical practice. At mid-century, descriptive psychiatrists were held in little esteem because diagnosis was unreliable and made little difference for treatment. The psychiatric pharmacopeia was limited to hypnotics and sedatives. Lack of empirical evidence was not unique to psychiatry. Treatments in all of medicine were based on the authority of clinical experience. New treatments were assessed by the results reported ‘by senior members of the medical profession, who had tried them out on a series of patients…and concluded that the outcome was better than that reported by others or by themselves in the past’ (10). The influence of the authority of one’s teachers, the experience of seeing patients improve during psychotherapy (most non-psychotic patients did), the logic and malleability of psychodynamic explanations and the readiness with which patients desperate for a way out of their dilemmas accepted those explanations combined to make believers of all but the most skeptical of trainees. Those who were non-believers were easily dismissed with ad hominem attacks on their unanalyzed resistance.
By the 1950s and 1960s, a two-class system of psychiatric care had arisen in the US. Middle and upper class patients (those who could pay out of pocket and those with generous insurance coverage) sought psychoanalytically oriented out-patient psychotherapy with private practitioners. Rogow surveyed a sample of psychoanalysts about the patients they had in treatment (11). Not only were the patients middle or upper class, but not one was Hispanic and very few were black. The yearly cost of an analysis was more than 80% of the median income of an American worker. Psychiatric trainees vied for opportunities to treat young, articulate, and well-educated patients with anxiety disorders. Working class patients with psychoses were cared for in grossly under-resourced state or county mental hospitals. Although many dedicated psychiatrists worked in the public sector, all too many worked in the state hospitals because they had no choice: they had yet to qualify for full licensure, their psychiatric training was marginal, or they had limited command of English. The paradox that the most seriously ill patients often receive care from the least well-trained psychiatrists remains the case today.
In 1962, I described my dismay that ‘in some centers…almost all the residents enter personal analysis…in my observation, it has been the bright and not the incompetent, the curious and not the unimaginative residents who have been attracted to psychoanalysis and thus lost to research, university teaching and public service’ (12). My dismay stemmed from (a) restrictions on the resident’s geographic mobility for the duration of a didactic analysis which might last for 5–7 years, (b) the press to earn supplementary income from after hours private practice to pay for the analysis, (c) the acquisition of a therapeutic technique altogether inappropriate to meet public need, and (d) lack of curiosity because they thought they possessed the exclusive road to salvation.
Almost 50 years later, the pendulum has swung so far that some young psychiatrists seem no longer to listen to patients at all. Personal psychotherapy during residency training has become decidedly uncommon. Physician applicants to teaching institutes affiliated with the American Psychoanalytic Association numbered 265 in 1977; they fell to 109 in 1987 and to 88 in 1996 (Myrna Weiss and Joan Abramowitz, personal communication). The institutes do not suffer from a dearth of students; the number of non-physician applicants has risen steadily since the 1986 US Federal Court decision that non-physicians could not be excluded from analytic training programs because such exclusions would constitute restraint of trade. What explains the decline in medical candidates? In part, it stems from the greater allure of competing career lines in psychopharmacology and neuroscience; in part, the reason is economic: medical students graduate with far greater indebtedness than was the case a generation ago, when many could afford to undertake a didactic analysis, and they are far less likely to have insurance coverage that meaningfully supports psychoanalysis. According to American Association of Medical Colleges data, 81% of graduating US students have educational debts in excess of $100 000. Cost is now a deterrent in view of the debt to be repaid. It is a rare psychiatrist who opts for a research career in psychotherapy. Indeed, few opt for research careers at all, a serious threat to the future of psychiatry (13).
The first double-blind randomized controlled trial (RCT) in medicine, the United Kingdom Medical Research Council trial of streptomycin for the treatment of tuberculosis, was not carried out until 1949. The RCT rapidly became the gold standard for research in psychopharmacology, but attitudes and beliefs relating to other treatments, notably psychotherapy, all too often were governed by the training physicians had received; research data and controlled clinical trials have developed far more slowly.
Picture: Leon Eisenberg with Leo Kanner, MD in New York May 17, 1960 when Dr. Kanner became the recipient of the First Annual Award of the National Organization for Mentally Ill Children.
The evaluation of the psychotherapies
Through the 1950s, 1960s, and 1970s there was a large psychotherapy sector untroubled by the lack of evidence for effectiveness. Varying schools of thought, each with fierce adherents, battled for supremacy. One of the few serious students of psychotherapy, Jerome Frank, compared research in the field to:
the nightmarish game of croquet in Alice in Wonderland in which the mallets were flamingos, the balls hedgehogs, and the wickets soldiers. Since the flamingo would not keep its head down, the hedgehogs kept unrolling themselves and the soldiers were always wandering to other parts of the field…it was a very difficult game indeed. (14)
Frank recognized that psychotherapy outcomes were better than wait-list comparison groups but remarkably similar to one another despite differences in the theories and techniques to which therapists professed allegiance. He concluded that a number of non-specific psychologic processes were common to successful psychotherapy: an intense confiding relationship with a therapist; a set of explanations for the patient’s distress; suggested alternative ways of dealing with the identified problems; the arousal of hope; and the restoration of morale. His conclusion offended proponents of all the schools of psychotherapy. Two decades later, Smith, Glass, and Miller made a more successful foray when they reported the results of meta-analysis of extant studies of psychotherapy (15). Their book was widely hailed by practitioners as establishing the effectiveness of psychotherapy because most treatments had a significant effect size; however, once again, outcome differences between treatments or between novices and experts were hard to detect.
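The meta-analytic logic behind the Smith, Glass, and Miller finding can be made concrete with a minimal sketch. The numbers below are hypothetical, not their data, and the fixed-effect inverse-variance scheme shown is only one common pooling method: each study contributes a standardized mean difference (Cohen’s d), and studies with smaller sampling variance receive more weight.

```python
import math

# Hypothetical study summaries (NOT Smith, Glass & Miller's data):
# (mean_treated, mean_control, pooled_sd, n_treated, n_control)
studies = [
    (12.0, 8.0, 5.0, 30, 30),
    (20.0, 15.0, 10.0, 50, 50),
    (7.5, 6.0, 4.0, 25, 25),
]

def cohens_d(mean_t, mean_c, sd):
    """Standardized mean difference between treated and control groups."""
    return (mean_t - mean_c) / sd

def variance_of_d(d, n1, n2):
    """Approximate large-sample sampling variance of Cohen's d."""
    return (n1 + n2) / (n1 * n2) + d * d / (2 * (n1 + n2))

# Fixed-effect pooled estimate: inverse-variance weighted mean of the d's.
weights, effects = [], []
for mean_t, mean_c, sd, n1, n2 in studies:
    d = cohens_d(mean_t, mean_c, sd)
    effects.append(d)
    weights.append(1.0 / variance_of_d(d, n1, n2))

pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
print(f"per-study d: {[round(d, 2) for d in effects]}")
print(f"pooled d = {pooled:.2f} (SE {se:.2f})")
```

A pooled d of roughly 0.5 or more across many studies is the kind of result that let practitioners claim effectiveness, even while differences between rival treatments remained hard to detect.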
Myrna Weissman’s paper on the ‘paradox of psychotherapy’ provides an elegant analysis of the development, and present status of Evidence-Based (psycho)Therapies (16). The matter will not be pursued further here except to note the irony that just as certain forms of psychotherapy have proved their worth, the economics of managed care have sharply restricted the ability of practitioners to provide psychotherapy!
Until recently there was no formal requirement that psychiatric residents learn about, let alone acquire competence in, any non-psychodynamically oriented forms of psychotherapy. To the extent psychotherapy is taught at present (rather than being swamped by psychopharmacology), it remains mostly psychodynamic (that is, based on psychoanalytic principles), reflecting the training of the senior teachers in academic programs (17). The Accreditation Council for Graduate Medical Education 2007 program requirements for residency training in psychiatry include (IV.A.5.a.3.e): residents shall develop competencies in ‘applying supportive, psychodynamic, and cognitive-behavioral psychotherapies to both brief and long-term individual practice, as well as to assuring exposure to family, couples, group, and other individual evidence-based psychotherapies’ (18). To address the challenge of measuring competence, the American Association of Directors of Psychiatry Residency Training has established a task force (with assistance from experts in each modality of psychotherapy) to operationalize these competencies in order to assess residents’ performance and to plan for remediation if they fall short (Lisa Mellman, personal communication). The decision to evaluate education by measuring competencies rather than by number of seminars attended, number of patients seen, and years of training is a major positive change.
The brain as the organ of the mind
In the last half of the 19th century, progress in pathology and bacteriology uncovered the pathogenesis of many diseases; yet there was disappointingly little progress on mental disorders. Thus, it was a major event in 1913 when Noguchi and Moore found Treponema pallidum in the brain of patients with general paresis, just 8 years after Schaudinn and Hoffmann identified the spirochete as the cause of syphilis. General paresis was an appealing model for the pathogenesis of psychosis. In its early stages, it could mimic any psychiatric disorder. Success in unraveling its pathogenesis as a manifestation of tertiary syphilis appeared to presage similar discoveries for the other psychoses. The long and hitherto fruitless search for brain pathology appeared to be over; the discovery of the spirochete in the brain was seen as the rebirth of neuropathology.
To the distress of neuropsychiatrists, dementia praecox and manic depressive psychosis continued to be impervious to laboratory research. Report after report of the discovery of histologic lesions in the brain proved to be an artifact. Neurochemistry was unsuccessful in manipulating brain cerebrosides and other insoluble tissue components. The failure of available methods to reveal pathologic changes in the brain in schizophrenia and manic depressive disease crystallized a belief that they were psychogenic. The paucity of evidence for an infectious etiology of schizophrenia did not deter many clinicians. Focal sepsis had its vogue in the 1920s, with many patients losing teeth, uteri, and parts of their intestines as treatment for their schizophrenia (19).
During the 1950s and 1960s, when clinical psychiatry in the United States enjoyed a considerable expansion in faculty representation, in time in the medical curriculum, and in recruits to its ranks, its methods were primarily psychologic; its departments were staffed by clinicians; training emphasized the meaning of the present illness in the context of the patient’s past and the therapeutic use of the doctor–patient relationship. Psychiatry became particularly attractive to the clinically oriented medical student who found it one of the few specialties with a persisting concern for the patient as a person in an era of organ-centered medicine. General practice, which might have provided an alternative, had almost ceased to enlist further recruits in the US because of its low prestige, difficult working conditions, and isolation from the medical mainstream. By the 1970s, in the era of social activism arising from Vietnam, recruitment into psychiatry was eroded by a new commitment to primary care. People-oriented medical students who were weighing careers in psychiatry (I chatted with many in the 1970s) opted for primary care in the expectation they could intervene medically and psychologically. My warnings that the economics of primary care (the press of cost controls was already evident) would leave them precious little time to take a thorough history, let alone provide counseling, were unavailing.
The shift from mind to brain
During the 1950s the care of patients with psychotic disorders was radically changed by a series of chance discoveries: of reserpine’s psychotropic effects when it was used to treat hypertension; of chlorpromazine as a tranquilizer during research on anesthesia; of iproniazid as a euphoriant when it was used to treat tuberculosis; of the antidepressant properties of imipramine in therapeutic trials for neuroleptic effects (20); and of the anti-manic effects of lithium when Cade (21) found that the lithium urate (chosen solely because of solubility in urine) caused sedation in guinea pigs. The serendipity of these findings does not minimize their importance but it does emphasize the lack of coherent biologic theories to guide their discovery; those theories emerged post hoc in the effort to account for the empirical findings.
The new therapeutic armamentarium had major consequences for the practice of psychiatry. It provided means for aborting acute psychotic episodes and for minimizing recurrences. Because remissions could be induced in a relatively short time frame (making insurance coverage feasible), psychiatric units in general hospitals expanded rapidly. Because the new agents were thought to be relatively syndrome specific, diagnosis and classification now became important for effective patient care and paved the way for the Diagnostic and Statistical Manual (DSM)-III and -IV. The new diagnostic scheme is a major advance over DSM-I and -II. But with each iteration it becomes more fragmented and bureaucratized. It has become an industry—and a profitable one at that—for the American Psychiatric Association, which makes tens of millions of dollars with each new edition because a DSM-IV code is the precondition for reimbursement. The situation has begun to resemble the debate among three umpires about the meaning of balls and strikes in the great American game of baseball. The first, a modest man, claimed only: ‘I calls ’em as I sees ’em.’ The second, an arrogant and officious man, insisted: ‘I calls ’em as they is!’ The third, Bill Klem, a man of philosophic bent, dismissed their comments with: ‘They may be balls, they may be strikes, but they ain’t nothin’ until I call ’em!’
Not the least of the benefits of psychopharmacology was the development of methodologies for the double-blind evaluation of the new therapeutic agents: drugs, social interventions, and psychotherapies (22). The discovery of psychotropic drugs stimulated the development of the neurosciences, which have flowered in an extraordinary fashion. The Society for Neuroscience, founded in 1969, had 1000 members in 1970. It was interdisciplinary in that its founders were neuroanatomists, neurochemists, neurophysiologists, neuropharmacologists, brain imagers, and clinical scientists: neurologists and psychiatrists. In the past 40 years, membership has multiplied 35-fold! Meetings have become a challenge to organize, getting from one session to another is an exercise in agility, and the camaraderie of earlier years is fading. The growth of the Society has been so prodigious, the territory it covers so broad, and the methods it employs so varied that neuroscience itself is beginning to fragment into sub-disciplines, of which cognitive neuroscience is an instance.
The success of neuroscience has exacted costs. The very elegance of research in neuroscience has led psychiatry to focus so exclusively on the brain as an organ that the experience of the patient as a person has receded below the horizon of our vision. We had for so long been pilloried by our medical and surgical colleagues as witchdoctors and wooly minded thinkers that many of us now seek professional respectability by adhering to a reductionistic model of mental disorder. We have traded the one-sidedness of the brainless psychiatry of the first half of the 20th century for a mindless psychiatry of the second half (23). Even psychoanalysts have found it convenient to recall that Freud (24), himself a product of 19th century reductionism, cautioned his followers that ‘all our provisional ideas will some day be based on an organic substructure…we take this possibility into account when we substitute special forces in the mind for special chemical substances.’
Inheritance and genetics
Fifty years ago, genetics was anathema in psychiatry. Now it is all the rage. The pendulum has swung from ‘it is all nurture’ to ‘it is all nature.’ What accounts for the intellectual tsunami? It had been known since antiquity that like breeds like and that mental diseases cluster in families, observations compatible with inheritance. What was inherited and how it was inherited remained a mystery. The first real clue, the work of Mendel in the 19th century, was lost in obscure publications and was not rediscovered until the turn of the 20th century. Racist assumptions were ubiquitous. The eugenics movement was American in origin and had strong support from many psychiatrists. The ‘feeble minded’ who were not sterilized could find themselves involuntarily institutionalized until their reproductive years were past (25).
The conflation of genetics with Nazi racist ideology thoroughly discredited genetics in the decades after World War II. Erik Strömgren (26) reports that in the 1920s and 1930s, most academic and asylum psychiatrists in Europe believed that schizophrenia and manic depressive disorder were inherited; after the war, genetics had become a dirty word. He was unable to discuss with most American psychiatrists even ‘the possibility of a genetic contribution to etiology.’ Strömgren ascribed their negative attitudes to their fealty to psychoanalysis; but the aversion to Nazism was no less instrumental.
At mid-century, there was a huge intellectual chasm between hereditarians and environmentalists based on shared misconceptions. Both sides mistook genes for fate; both believed that genotype determines phenotype. Environmentalists rejected the therapeutic pessimism implicit in a reductionistic view of genetics. They preferred to view the organism as a tabula rasa and sought the psychogenic origins of psychosis. They dismissed Kallmann’s (27) twin studies showing remarkably high (86%) concordance rates for schizophrenia in monozygotic (MZ) vs. 14% in dizygotic (DZ) twins. Kallmann’s work was subjected to unrelenting and justified methodologic criticism; it suffered from ascertainment bias, lack of blinding when co-twins were evaluated, fuzzy diagnostic categories, and the like. But the baby went out with the bathwater. More rigorous studies have found much lower pair-wise MZ concordance rates (30–40%), but the MZ/DZ differences are robust enough to demonstrate a role for inheritance (28). Moreover, concordance rates for MZ monochorionic twins are far higher than for those who do not share a placenta, suggesting that the intrauterine environment must play a significant role (29). The critics of genetic determinism failed to apply the same methodological scrutiny to their own even woolier hypotheses (the schizophrenogenic mother, schizophrenia as the outcome of faulty communication in the family, the schizophrenic patient as a rebel in an insane world, etc.).
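The arithmetic behind the MZ/DZ comparison can be sketched in a few lines. This is illustrative only: the ‘modern’ DZ rate below is an assumed value (the text gives only the MZ range), and Falconer’s approximation treats concordance rates as crude proxies for twin correlations, a simplification that real analyses replace with liability-threshold models. Note how the approximation visibly breaks down on Kallmann’s inflated figures.

```python
def pairwise_concordance(both_affected, one_affected):
    """Pairwise concordance: fraction of twin pairs with both members affected."""
    return both_affected / (both_affected + one_affected)

def falconer_h2(r_mz, r_dz):
    """Falconer's rough heritability approximation, h2 ~= 2 * (rMZ - rDZ).

    Using raw concordance rates as the 'correlations' is a crude proxy;
    a value above 1.0 signals that the proxy has broken down.
    """
    return 2.0 * (r_mz - r_dz)

# Kallmann's historical figures vs. more rigorous later estimates.
# The DZ rate of 0.10 for modern studies is an assumed illustrative value.
for label, c_mz, c_dz in [("Kallmann (1946)", 0.86, 0.14),
                          ("modern estimate", 0.35, 0.10)]:
    h2 = falconer_h2(c_mz, c_dz)
    flag = "  (exceeds 1.0: proxy invalid)" if h2 > 1.0 else ""
    print(f"{label}: h2 ~= {h2:.2f}{flag}")
```

The robust point in the text survives this crude arithmetic: whatever the exact rates, a consistent MZ/DZ gap implies some genetic contribution, while concordance well below 100% in genetically identical twins implies environmental contributions as well.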
The issue of diagnostic reliability was brought into focus by Mort Kramer (30) who initiated studies on the puzzling discrepancy between US and UK data on the prevalence of schizophrenia and depression (higher rates for schizophrenia and lower rates for depression in the US than in the UK). National Institute of Mental Health (NIMH) funding enabled panels of UK and US psychiatrists to examine a common set of American and British patients via videotape. The findings were unequivocal; it was not disease prevalence, but disease criteria that differed between the two countries. Once criteria were standardized, the difference largely evaporated. That study gave impetus to the development of standardized psychiatric interviews and led to an operationalized diagnostic manual (DSM-III) in 1980.
The NIMH underwrote a comprehensive study of the prevalence of mental disorders in the United States employing the new instruments. Populations were sampled in five Epidemiologic Catchment Areas (New Haven, Baltimore, St. Louis, Durham and Los Angeles). Not only was diagnosable mental disorder found to be common (overall annual prevalence: 20%), but only one in five of those who met criteria for a mental or addictive disorder were actually receiving care (31). The majority of those in need got such care as they received from primary care practitioners rather than from specialist mental health services. Regier et al. (32) coined the phrase: ‘the de facto US mental health services system’ to describe the pattern of care actually available in the community as opposed to the system on paper.
The message was unambiguous: the magnitude of the need for treatment is such that the only possible public health solution is to enhance the capacity of the primary health-care system to provide mental health treatment (33). Epidemiology had brought home forcefully the dimensions of the problem, dimensions that hospital and clinic-based studies could not have revealed (34). Psychiatry has not yet fashioned an adequate response. Psychiatrists have yet to make a commitment to improve the skills of primary care providers, to make themselves available as consultants for patients who fail to respond to treatment, and to be directly responsible for only the most difficult cases. The integration of psychiatry and medicine remains to be achieved.
Catastrophic events and psychiatry
World War II
Of some 18 million men screened for the military draft, almost 2 million were rejected for emotional or mental defect. Another three quarters of a million were prematurely separated from the service for psychiatric symptoms. During World War I, draft rejection rates were at most 2%. Yet rates of breakdown in service were 12% in World War II as against 2% in the first war! The screening criteria used during World War II, which led to rejection of candidates with high trait anxiety, were ineffective. Social and environmental determinants proved far more important to success in the military than were putative screening measures.
Under combat conditions, the Army Medical Corps relearned the lessons of World War I: the key role of forward treatment for exhaustion rather than neurosis; the importance of unit morale and group cohesion in maintaining the effectiveness of soldiers and reducing breakdown; and the inability to screen out inevitable psychiatric casualties. Rates of psychiatric morbidity were found to depend on unit and combat environment as well as individual susceptibility. Appropriate interventions could return the majority of psychiatric casualties to combat duty (35).
The War had an extraordinary impact on American psychiatry. It initiated the process of revising diagnosis and classification; it resulted in a marked expansion in psychiatric manpower as many general medical officers from the service found psychiatry an attractive option when they returned to civilian life and further training; it fostered federal support for psychiatric research; and, perhaps most important of all, it sparked the development of an exigent approach to care (the open hospital and community-based treatment).
The Standard Nomenclature of Disease, based on case experience in state mental hospitals and adopted in 1934, proved entirely unsatisfactory for use by psychiatrists in induction stations, in military service, and in the Veterans Administration. The Armed Forces undertook a sweeping revision of the classification in 1945, as did the Veterans Administration; the result was three US systems, none fully compatible with the International Statistical Classification! With support from the NIMH, the American Psychiatric Association prepared its first Diagnostic and Statistical Manual of Mental Disorders (36) which became the benchmark, to be followed by editions in 1968, 1980, and 1994, with another promised in 2012.
The War led to a substantial expansion in psychiatric manpower. Trained psychiatrists were so few in number when WW II began that the Armed Services had to press general physicians into psychiatric duty after short training courses (producing what were derisively labeled ‘90-day wonders’). Many of those pressed into duty became so engrossed by their clinical experience that they undertook formal psychiatric training at the War’s end. Membership in the American Psychiatric Association, only 2423 in 1940, more than doubled to 5856 in 1950; 10 years later, it almost doubled again to 11 037.
The United States is unique in that clinically trained psychologists, social workers, and other mental health clinicians far outnumber psychiatrists. According to data from the NIMH Survey and Analysis Branch (2001), there are only some 41 000 clinically trained psychiatrists in the US in contrast to about 77 000 clinical psychologists, 96 000 social workers, and 83 000 registered nurses in mental health organizations. In addition, there are 108 000 counselors and 44 000 marriage and family therapists. Because psychologists and social workers are eligible for reimbursement as independent providers of care and because counselors are employed to provide care by managed care organizations (MCO), competition in the US mental health ‘market place’ is intense. This competition has contributed to a significant decrease in psychiatrists’ inflation-adjusted salaries (2). The professional societies representing each group joust over hegemony. For comparison, England (excluding Scotland, Wales, and Northern Ireland) with a population of 50 million has about 6400 psychiatrists (2600 of them consultants), 4700 psychologists (including trainees), and 36 000 qualified psychiatric nurses, whom Professor David Goldberg regards as ‘the mainstay of UK mental health services’ (personal communication 2001).
In the aftermath of a war in which scientific research had played so vital a role in the allied victory, the National Mental Health Act was passed overwhelmingly by the Congress in 1946. The Act gave the new National Institute of Mental Health a mandate to foster psychiatric research.
Lastly, the focus on the situational determinants of breakdown, on the importance of exigent treatment and the need for the rapid reintegration into social roles provided impetus for brief hospitalization, open hospitals, group methods and the community mental health movement.
[ Leon Eisenberg with Leo Kanner, M.D. in New York May 17, 1960 when Dr. Kanner became the recipient of the First Annual Award of the National Organization for Mentally Ill Children. ]
[ Leon Eisenberg with his wife, Carola Eisenberg, at the Julius Richmond Symposium on Child Health and Development in the 21st Century, Boston, Massachusetts, September 26, 2006. ]
The Vietnam War
The most striking psychiatric phenomenon in Vietnam was the low rate of identified psychiatric casualties and the relative absence of combat fatigue. Unique to Vietnam was the inverse relationship between rates of servicemen wounded in action and those who became neuropsychiatric casualties. As the war dragged on, an increasing number of characterological problems surfaced: racial incidents, disciplinary problems, and substance use. Was the low rate of psychiatric casualties related to the high rates of substance use?
Concern about substance abuse among servicemen in Vietnam paralleled concern about drug use in the US. The availability of cheap heroin ($6/day to maintain a habit), grown in the golden triangle of Thailand, Burma, and Laos, assured easy access. At peak use in October 1971, Robins (37) estimates that almost half (45%) of Army enlisted men were using narcotics (heroin or opium) and more than 75% were using marijuana and alcohol. Almost half of narcotic users reported themselves to have been addicted. Yet, on follow-up, Robins found that very few of the identified heroin users continued regular use after demobilization. Most who did had been addicted prior to service. For the others, substance availability, boredom, absence of family and community, and lack of commitment to an unpopular war led to high user rates which abated promptly after leaving Vietnam.
Persistent mental distress after exposure to catastrophic situations is as old as recorded history, but the formal identification of what we now know as post-traumatic stress disorder (PTSD), marked by intrusion of memories of trauma, numbing and avoidance, and hyperarousal to stimuli evoking recollections, only took place in the aftermath of the Vietnam War. PTSD did not appear in the official psychiatric nomenclature until DSM-III (1980). A political struggle waged by social workers and activists on behalf of Vietnam veterans who demanded acknowledgement of their distress (38) was instrumental in legitimating PTSD as a psychiatric diagnosis.
Venues of care
State and county mental hospital system
The number of in-patients in state and county mental hospitals continued to increase dramatically during the first half of the 20th century: from 188 000 in 1910 to 512 000 in 1950. At that rate of growth, the census was projected to exceed 700 000 within 20 years. Instead, it peaked at 550 000 in 1956, slowly receded in the next two decades (to 535 000 in 1960 and to 338 000 in 1970), and fell precipitously in the last 25 years to about 190 000 in 2000 (2).
Through the first half of the 20th century, the mental hospital system functioned to protect communities and families from dealing with distressed and often distressing patients. Economies of scale rationalized increasing size; the patient’s quality of life was not part of the cost-benefit equation. Institutions operated on rigid schedules tailored to bureaucratic needs. Locked doors, loss of personal control, the regimentation of everyday life, separation from family and community, and unoccupied days of hopeless despair led to a ‘social breakdown syndrome’ superimposed on the initial illnesses that led to admission (39). The longer the stay, the sicker the patient became. The symptoms generated by anomie were attributed to disease in the patient. The hospital contributed to the very chronicity that fed its growth.
Indeed, so grim was the prognosis for chronic mental illness that psychosurgery, a desperate remedy for a desperate condition, was employed in private, public, VA, and university hospitals. Pressman (40) estimated that between 1936 and 1951, about 20 000 patients underwent lobotomies in spite of the absence of any scientific evidence to support its use. Psychosurgery was largely abandoned in the 1950s; the newly discovered psychotropic drugs were visibly more effective and far less toxic.
Although the psychotropic drugs are commonly credited for the emptying out of state hospitals, that was true only in large, understaffed, and poorly led institutions where patients had been warehoused (41). The philosophy of the open hospital and the provision of services in the community led to much earlier discharge well before the wide availability of drugs in organized and well-managed hospitals (42). However, the effectiveness of psychotropic drugs made it far easier to establish acute psychiatric units in general hospitals and to maintain patients in the community without hospitalization.
Deinstitutionalization was initiated by three factors: a socio-political movement in favor of open hospitals and community mental health services; the advent of psychotropic drugs able to abort psychotic episodes; and a financial imperative to shift costs from state to federal budgets. Failure to track patients after discharge enabled state mental health authorities to declare victory. There was no tabulation of the tens of thousands of elderly patients who were transinstitutionalized from asylums to nursing homes and the thousands of young adult patients who were discharged to homelessness on city streets or to follow-up in local jails.
Discharging chronic patients, well before aftercare services were in place, offered fiscal relief to the states, which had borne the full burden of the mental hospital system; Medicaid, established in 1965, offered the states matching funds from the federal government. Once patients were discharged, their housing, medical, and general welfare costs were shared between federal and state budgets. Specialty psychiatric hospitals were excluded from Medicaid coverage, but nursing homes were not; thus, states realized significant savings by transferring psychiatric inpatients to nursing homes. As a result, the nursing home population grew from 470 000 to 928 000 during the 1960s (2). Medicaid, along with Medicare, became the largest supporter of the mentally ill in the US without ever being labeled a mental health program. By 1985, nursing homes had more than 600 000 residents diagnosed as mentally ill, largely as the result of Medicaid (43).
Budgetary savings lagged well behind discharge rates. Dismantling existing state hospitals was politically contentious. In rural communities, the state hospital might be the major employer. Though services were, in principle, to follow patients back into the community, chronic hospital attendants, protected by civil service and powerful unions, were unprepared to become community health workers and unwilling to move to new job sites. In consequence, the inpatient census declined far more rapidly in the first several decades than did the number of state mental hospital employees.
Residential treatment beds in state and county mental hospitals declined from 413 000 to 63 000 between 1970 and 1998. Despite a small increase in beds in private psychiatric hospitals (from 14 000 to 34 000) and in general hospital psychiatric beds (from 22 000 to 54 000), the ratio of hospital and residential treatment beds per 100 000 people declined from 264 to 112, while the number of hospital admissions grew from 1 300 000 to 2 300 000 (i.e. from a rate of 644 to 875 per 100 000). Length of stay fell rapidly, while first admissions, repeat admissions, and outpatient treatment episodes rose (the latter from about 2 million to 7 million). For most patients, deinstitutionalization has been an extraordinary benefit, even though many former psychiatric patients have been left homeless and without care.
Costs of care
In the years since World War II the application of new science to medicine led to an exponential increase in the capabilities of the health-care system to diagnose, treat and prevent disease. Along with the new knowledge came medical specialization, an increase in the years of training required to qualify as a specialist, the aging of the population, and a vast increase in the number of health-care workers needed to support the work of physicians. A ratio of one paramedical worker to one doctor at the beginning of the century had become 10 to 1 by 1970 and 15 to 1 by its end. Pari passu, there was an enormous growth in medical care expenditures. The proportion of the gross domestic product consumed by health-care rose from about 4% in the 1940s to 13% in 1999 and to 17.6% in 2009. Health-care spending is currently increasing at 150% of the rate of GDP growth (44).
When I was a house officer in 1946, no one talked about costs in teaching hospitals. I cannot recall once being asked not to admit, or to discharge a patient early, because of the costs of care. Quite the opposite! To spare patients’ out-of-pocket costs, we admitted them for diagnostic study because insurance perversely covered tests for hospitalized, but not for ambulatory, patients.
House officers were few and paid hardly at all. The total number of interns and residents (all years) across the United States was just under 16 000 in 1945. By the year 2008, the number had grown to 108 376, some 4769 of them psychiatric trainees (45). As an intern in 1946, I received food, a bed in a shared room, hospital whites and $25 a month for a stipend. The mean salary for US first year residents in 2008 was $47 166 (46). These salaries must be seen in relation to medical school tuition; it was $400 when I was a student at Penn in the 1940s and Penn’s tuition is $35 690 today. Estimated annual expenses for a student at Penn are $67 324. Estimated total expenses at the state school, PSU, are $52 500 for instate residents and $64 000 for out of state residents. US medical graduates carry an average debt load at graduation of $155 000, a major impediment to an academic career. With interest forbearance during residency training and a 10-year repayment plan, young physicians, often starting families, face approximately $2000/month in loan repayment.
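The ~$2000/month figure above can be checked against the standard loan amortization formula. A minimal sketch follows; the 6.8% annual rate is an assumption (a typical federal student loan rate of the period), as the text does not state one:

```python
# Rough check of the monthly repayment figure cited above, using the
# standard fixed-payment amortization formula. The 6.8% rate is an
# assumed value; the paper gives only the principal and the term.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Fixed monthly payment for a fully amortizing loan."""
    r = annual_rate / 12      # monthly interest rate
    n = years * 12            # total number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# $155,000 average debt on a 10-year repayment plan
payment = monthly_payment(155_000, 0.068, 10)
print(f"${payment:,.0f}/month")
```

This yields roughly $1800/month; with interest accruing during residency forbearance, the effective principal grows, which is consistent with the approximately $2000/month cited.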
The 1940s were anything but a halcyon past in medicine. The system exacted sacrifices from the janitors, floor cleaners, cooks, clerks, technicians, aides, nurses, and others who labored to keep the hospital going but who were not unionized. As house officers, we may have prided ourselves on providing excellent technical care, but we saw nothing wrong with long waits for dispensary patients, limited ward visiting hours (set to maximize staff convenience), and inpatient stays prolonged so that we could learn from the course of illness or pursue an interesting finding. With shame I confess that I did not challenge the assumption that free-care patients owed us the obligation to make themselves available for teaching purposes.
As medical capabilities multiplied and as insurance coverage grew, health-care costs rose rapidly. More to the point, reimbursement for hospitals was on a cost-plus basis. The longer the stay, the more numerous the tests and the consultations, the greater the reimbursement. Costs for new hospital buildings and expensive equipment were amortized against per diem charges. Another jump in revenue came with the passage of Medicare and Medicaid in 1965 during the Johnson Administration. The legislation provided major and much needed benefits to elderly and poor Americans. The bills were savagely opposed by the leaders of organized medicine and by some academics. Nonetheless, the legislation proved to be a bonanza for the profession when the White House, in response to the perceived political power of the American Medical Association, invited its leaders to help fashion reimbursement regulations. The specification of usual and customary fees and the perpetuation of a cottage-industry approach turned losers into winners for doctors and hospitals; free care, for the poor and the elderly, became paid care. I regret to note that all of us—practitioners, academics, administrators, and hospital trustees—fed the exponential increase in costs.
We created a pool of funds that attracted corporate entrepreneurs obeying Sutton’s Law (Willie Sutton, a career bank robber, was caught and jailed repeatedly. Asked why he continued to rob banks, Willie answered without hesitation: ‘Because that’s where the money is.’). The gold rush was on. It led to what economist Eli Ginzberg termed ‘the monetarization of medicine’ (1); that is, the penetration of the money economy into all facets of the health-care system because of:
the opportunities created by faulty public policy, primarily through reimbursement for those with money-making proclivities to establish a strong niche in what was formerly a quasi-eleemosynary sector…To secure a long-term financial foundation (for innovation, quality, access and equity at an affordable cost), American medicine will require a combination of political leadership and professional cooperation that is not yet visible on the horizon.
For-profit MCO recognized a prime opportunity to make a killing and moved aggressively into the marketplace. In 1985, 75% of health maintenance organization (HMO) members were in not-for-profit plans; by 1999, the proportion had fallen to one-third (47). This proportion has remained relatively fixed since then, with a mere 31% of HMO plans currently operating as not-for-profit institutions. Some pioneering non-profit HMOs (like HIP, Health Insurance Plan, and HCHP, Harvard Community Health Plan) either had to sell out or change from a staff model to fee-for-service payment. High-quality HMOs (like Kaiser-Permanente) began to experience unsustainable losses and had to reduce staffing patterns. Between 1970 and 1998, the number of health administrators increased more than 24-fold, while the number of MDs, registered nurses, and other clinical personnel increased only 2.5-fold. If the US reduced its administrative workforce in health to Canada’s level (on a per capita basis), we would employ 1.4 million fewer managers and clerks (47). MCO chief executive officers rake in huge compensation packages; Health Plan Week reported total 2008 compensation for thirteen MCO executives that ranged from a low of $2.2 million to slightly over $24 million (48).
There is simply no way at all that an academic health science center can maintain excellence in clinical care, serve impecunious patients, teach students and residents, advance the science of medicine, and compete on price with for-profit hospitals that do not teach or do research and are willing to provide care no better than they need to, as long as they can do so at a profit (49). Graduate medical education is centered in fewer than 1500 of the nation’s 7000 hospitals; 100 of them provide almost half of residency training (50). Clinical research in teaching hospitals entails costs beyond those defrayed by grants. Analysis of extramural grant awards reveals an inverse relationship between penetration of the medical market by MCO and the likelihood that medical schools situated within those market areas compete successfully for National Institutes of Health awards (51). Potential investigators in such schools have less protected time because they are obliged to carry greater patient care responsibilities (52). Costs at academic medical centers are approximately 44% higher than those for non-teaching hospitals because of teaching intensity (53). Without substantial subsidies from an all-payer fund, academic medical centers are non-starters in a competitive medical marketplace.
Some health-care organizations have engaged in clearly illegal, if not criminal, behavior. Columbia/HCA, the largest for-profit hospital chain, had to sack its Chief Executive Officer in response to a federal criminal investigation of its practices (54). The investigation found that the company maintained two sets of books, one for its own accounting purposes and a second to justify overcharges to the government. It pressured its physicians to invest in its hospitals so they would have a financial stake in referrals; it provided cash bonuses to its executives if they met financial targets (55).
Increasing health-care expenditures are inexorable because of the aging of the population, a labor-intensive health sector, and, most important of all, technological innovation and the resulting greater capabilities of medicine (56). Examples of highly desirable and very costly developments (none of them available when I was an intern) are heart surgery, renal dialysis, joint replacement, organ transplantation, new imaging methods (computed tomography, magnetic resonance imaging, and positron emission tomography) and more effective, but more expensive, drugs. Americans spent $216.7 billion on prescribed drugs in 2006, a five-fold increase from 1990. Ten per cent of health expenditures were for medication in 2006, with 31% going to hospitals and 21% to physicians. In 2007, 3.8 billion prescriptions were filled, compared with 2.2 billion a decade before. Average prescription prices increased at twice the rate of inflation. Meanwhile, the number of new molecular entities obtaining FDA approval is shrinking. In 2006 Medicare offered a new option, Part D, to cover some of the costs of outpatient medications, but, incredibly, Congress forbade Medicare to negotiate prices directly with manufacturers. Health and Human Services estimates are for a 138% increase in prescription drug spending over the next decade (57). Despite a rapid decrease in the average length of hospital stay (LOS) for a given illness, hospital costs for the care of that illness episode have increased. The cost of each day has gone up faster than the LOS has shrunk! The early days of hospitalization cost the most because of high-tech interventions. It is the relatively inexpensive convalescent days that are being eliminated. As LOS goes down, stress on house officers goes up; new admissions demand much more physician input than recovery days (58).
Why did insurance coverage for in-patient medical treatment exclude psychiatric care? Private insurance was an innovation in the US initiated by a Texas teachers’ union in 1929. At that time, serious mental illness meant custodial hospitalization in state and county mental institutions. Insurance plans guaranteed subscribers access to participating member hospitals; that is, general hospitals. For the most part, those hospitals did not have psychiatric beds; hence, there was no coverage.
In the 1950s, state hospitals were large, mostly custodial institutions. In 1955 there were 340 public psychiatric beds per 100 000 population; by 2005 we were down to 17 per 100 000 (59). With the arrival of effective psychopharmacology in the 1960s, psychiatric in-patient stays became brief and many general hospitals (which had unfilled general medical beds) opened psychiatric units. Blue Cross and other insurance plans extended coverage for admission to those units, commonly for up to 30 days (60). Psychiatric hospitals also became eligible for reimbursement, and the number of private treatment beds rose steadily from 14 295 in 1970 to 44 871 in 1990 (before falling to 33 635 in 1998) (61). Chains of private psychiatric hospitals proliferated as independent hospitals were bought up. By the mid-1980s, their profits had become so enormous that they were touted as an investment opportunity by a leading brokerage firm. One advantage of psychiatric hospitals for investors was that Diagnosis Related Groups could never be applied to them because of the much greater variability in the duration of stay for an episode of mental vs. medical illness. The psychiatric hospital stay depended not only on the nature of the patient’s disorder but also on the availability of alternate living arrangements and appropriate treatment in the community. In an October 1984 advisory to its clients entitled ‘The Psychiatric Hospital Industry,’ Salomon Brothers, a Wall Street brokerage firm, reported that: ‘The psychiatric hospital industry is an attractive sub-segment…for investors.
In-patient psychiatric care is widely insured, occurs with predictable and increasing incidence and is complex enough to render cost-control efforts difficult…[additional] advantages over general hospitals include the widespread acceptance of two classes of psychiatric care: high quality care in private psychiatric hospitals…versus lower quality care in government-owned mental health centers.’ (Italics added). What enchanted stock brokers was the difficulty in implementing cost control because of imprecision in diagnosis, ‘the major role of environmental factors,’ ‘lack of standardized treatment,’ and ‘inability to measure the extent of recovery.’ What was bad for patients was good for investors.
In the late 1980s and 1990s, MCO and indemnity insurers began to sharply limit the length of the hospital stay they would reimburse; the 30-day limit we had protested a decade earlier became a nostalgic remembrance of days past. Leading psychiatric hospitals teetered on the brink of bankruptcy. Length-of-stay data from two excellent, academically affiliated psychiatric hospitals emphasize the point. In 1986 in hospital A, the average LOS was about 73 days, the number of admissions per year about 1000, and the number of beds 320. By 1992, although average LOS had been cut to 30 days, the institution was still heavily in the red because of unreimbursed days. By 2000, hospital bed capacity had been reduced by half and LOS to 8 days; admissions had increased six-fold (yes, to 6000!). The hospital balance sheet remains at the margins (because third-party payors reimburse at less than full cost), but the major hemorrhage from endowment has been staunched. Data from hospital B are similar: average LOS 45 days in 1990, 24 days in 1992, and 14 days in 2000. Some patients who need hospitalization are denied it altogether; others are pushed out prematurely because the insurer will not agree to additional days the staff considers necessary. The rules are designed to improve the bottom line rather than to provide optimum care. Meanwhile, general hospitals are converting psychiatric beds into medical-surgical capacity, as reimbursement for these services is significantly higher (62).
The tasks ahead are both organizational and political. Professional organizations tend to conflate the public interest with their professional interest, to function as a guild rather than as an advocate for the public. The challenge is to ally ourselves with other professionals in defense of the health of the public rather than engaging in internecine warfare with psychologists and social workers over hegemony and fees. If we focus on meeting public need, psychiatry will have an honorable place in medicine.
In 1973 (63), I concluded a paper with a bold assertion I now repeat:
Psychiatry at its best is a paradigm for the general medical practice of the future. This may seem an outlandish claim for a field which boasts few spectacular advances. Yet I believe it to be true because psychiatric practice deals with human distress in a context that must include the psychosocial as well as the biological. There are no imperialistic aims behind this claim. Quite to the contrary, in so far as psychiatry is successful in clarifying the psychobiological bases of health and illness, that knowledge will pass into the domain of the generalist and the psychiatrist will join other specialists in the secondary and tertiary cadres of the health system.