Several years ago, Ken Pope, prolific sharer of psychological information, posted excerpts from a paper by Paul Wachtel, a very bright man and author of Psychoanalysis and Behavior Therapy (1977) and many other fascinating books. Given my friend Doug’s tendency to be more interested in “helping” than promoting himself – witness the preceding unpublished questionnaires – I think it might be instructive to re-post them here.
From: “Ken Pope” <email@example.com>
To: “Ken Pope” <firstname.lastname@example.org>
Subject: RECOMMENDED: Evidence-Based Practice’s Flawed Assumptions (Paul Wachtel)
Date: October 4, 2010 5:27 PM
*Psychoanalytic Psychology* (vol. 27, #3) includes an article: “BEYOND ‘ESTs’: Problematic Assumptions in the Pursuit of Evidence-Based Practice.”
The author is Paul L. Wachtel, PhD.
Here’s how the article begins:
Increasingly, in recent years, there have been calls for establishing the practice of psychotherapy on an evidence-based foundation. In principle, this is a salutary development. Unfortunately, however, the “empirically supported treatments” (EST) movement, which has largely dominated discussion of evidence-based practice in recent years, has been characterized by a set of assumptions that impede sound understanding of the sources of therapeutic change and generate biased conclusions regarding what therapeutic approaches are actually helpful to patients.
I aim in this paper to examine closely these “EST” assumptions and to indicate an alternative view of how clinical practice can be rooted in respect for evidence.
The reader will notice that I have placed the terms “empirically supported” and “EST” in quotation marks. I do so throughout this article because I do not wish to further contribute to the misconceptions that result when the concepts of empirical validation or empirical support are ceded to the advocates of a particular tendentious definition of those ideas.
It reflects a problematic acceptance of faulty premises when critics of this parochial methodology say things like “it is important that training programs teach therapeutic approaches other than ESTs as well.”
Such statements seem to accept the idea that “ESTs” are the only therapies that are empirically supported, and then try to battle for some space for therapies that are not “ESTs,” as if those other therapies, though not empirically supported, have some other virtue.
In fact, as I shall argue in this article, there are serious flaws in the empirical support for many “ESTs” as therapies applicable to the majority of patients who seek therapy and, conversely, there is often evidence at least as strong or stronger supporting therapeutic approaches not on the “EST” lists that have been promulgated (see, e.g., Shedler, 2010).
One good indicator of the conceptual confusion and ideological scrambling that has characterized much of the literature on the empirical foundations of therapeutic practice is the shifting vocabulary that has characterized the debate.
In 1995, the Task Force on Promotion and Dissemination of Psychological Procedures, a group originating in the clinical psychology division of the American Psychological Association (APA), published a list of treatments that were deemed to be “well-established” or “probably efficacious” (Task Force on Promotion and Dissemination of Psychological Procedures, 1995).
In the relatively few years since this list appeared, the nom de guerre of the movement created by these activists has mutated with some regularity. The first shift in the rhetoric was from “well established” or “probably efficacious” to “empirically validated.” Before long, however, this terminology too gave way, under pressure from critics who noted that it was not consistent with a genuinely scientific attitude to claim that the approaches on the list had been “validated” when the cumulative findings of research over time so often lead us to modify our initial enthusiasms.
Thus, a new version of the list then appeared–though with little change in the criteria for inclusion or exclusion–under the name “empirically supported” treatments.
Presently, that terminology too is in the process of being jettisoned, with “evidence-based practice” the rhetoric du jour.
In considering where the criteria advocated by the various task forces and committees fall considerably short of adequate science, there are at least four features that require our attention.
I will discuss in succession the emphasis on patients in a study being limited to a single diagnostic category, on manualization, and on randomized, controlled trials (RCTs).
After explicating the ways in which each of these three criteria can be misused to both restrict and misrepresent the available evidence, I will then turn to a fourth characteristic of the movement I have been discussing–the dichotomous thinking that leads to the promulgation of lists of treatments that are empirically validated and to a clearly implied shadow list of those that purportedly are not.
In an interesting irony–because many of the leading “EST” advocates represent therapeutic orientations that originated as a challenge to the purported “medical model” of psychoanalysis–“EST” advocates insist that all valid conclusions about the efficacy of any particular therapeutic approach must rest on a methodology that essentially mimics the structure of drug trials in medical research.
However, they do so without considering sufficiently what makes the RCT methodology appropriate to that realm of investigation.
The use of RCTs in drug trials almost always also includes, as an essential element, the employment of a double-blind methodology.
Neither the patient nor the doctor administering the medication knows whether any particular patient is receiving the medication under investigation or a placebo coated to be identical in appearance.
Indeed, when the side effects of the active medication are such that they are readily detected by either party, the internal validity of the study is seriously compromised. In contrast, in studies of psychotherapy, no one is unaware of which treatment is being offered or received.
Without this crucial feature of the drug studies that the “EST” methodology attempts to mimic, the “gold standard” looks more like painted tin.
The absence of (and indeed, in most cases, the virtual impossibility of) a double-blind methodology in psychotherapy outcome studies is probably one important factor contributing to the finding by Luborsky et al. (1999) that most studies end up demonstrating the superiority of whatever approach the investigator is most closely allied to.
The knowledge, not only by the investigator but by the therapist, of which procedure is being administered introduces powerful–and impossible to measure–influences on what actually transpires in the room when the therapist is practicing one approach or another.
A…feature of the standard “EST” criteria is perhaps even more problematic–the requirement that the treatment be manualized.
Here again, proponents of these criteria offer a reasonable-sounding rationale–to evaluate whether the treatment being investigated in any particular study was effective, we need to know what the actual treatment was.
But here again, what has emerged has been a tendentiously conceived and extraordinarily narrow investigative strategy. Instead of being viewed as one of a variety of possible solutions to the scientific challenge of specifying the actual therapy employed in a study, manualization has increasingly become a requirement by granting agencies for funding research, and training in “manualized treatment” has become widely (and falsely) equated with training in therapeutic approaches that are based upon solid and reliable evidence.
Most problematically, the question-begging logic of the “EST” paradigm essentially implies that a nonmanualized therapy cannot by definition be empirically validated or supported since (with very limited and grudging exceptions) manualization has been treated as a fundamental requirement for empirical support per se.
This is not a championing of science; it is an abdication of science, a decision not to investigate nonmanualized treatments that bespeaks at best a poverty of imagination in addressing methodological challenges.
It might be objected that the “EST” paradigm does not strictly require a manual.
Chambless and Ollendick (2001), for example, in an influential statement of the “EST” approach, depict as the requirement “treatment manuals or their equivalent in the form of a clear description of the treatment.”
Now, I am as in favor as they are of clear description, but this seemingly more open and reasonable statement is not consistent with the actual history of the “EST” movement, whose proponents have consistently dismissed an enormous body of evidence supporting the therapeutic impact of treatments other than those on the “EST” lists.
This dismissive approach to uncongenial data was evident as early as the original Division 12 Task Force on Promotion and Dissemination of Psychological Procedures (1995).
In their first published statement, attempting to discredit the influential review by Smith, Glass, and Miller (1980), which they noted had “convinced many that substantial evidence demonstrated the efficacy of psychosocial treatments,” the task force publication stated, “Finally, and *perhaps most important*, the studies in the Smith et al. review predated the standardization of treatments in research studies *through the use of treatment manuals*” (p. 3, italics added).
The two italicized phrases reveal the degree to which manuals were made the linchpin of an effort to prescribe one and only one methodology for psychotherapy outcome research and to dismiss or ignore evidence gathered in other ways, however careful, methodologically sophisticated, and appropriate to the problem at hand.
The second Division 12 Task Force (Chambless et al., 1996) states quite explicitly that manualization was virtually a sine qua non for them to regard a treatment as empirically validated, with only “specific and *rare* exceptions” (p. 6, italics added).
My point is not that the creation of therapy manuals is never appropriate or useful.
A considerable range of therapies (including some fairly complex psychodynamic and humanistic approaches) have been manualized for research purposes, and I do not mean to deride the efforts of these investigators.
Rather my point is that to make manualization a requirement for regarding a treatment approach as evidence-based is not a reflection of commitment to scientific rigor, but a political ploy that effectively excludes from the lists of evidence-based treatments a variety of treatments for which there is in fact a very substantial body of evidence (see below), but which do not happen to have approached the task of empirical validation via the particular investigative strategies that the “EST” movement advocates.
Part of their Walmart approach to mental health care is that “ESTs” are cheap because “many of these interventions can be disseminated without highly trained and expensive personnel.”
Turning specifically to CBT, which is clearly the approach with which they are strongly identified, they state, as a virtue, that “CBT is effective even when delivered by nondoctoral therapists or by health educators with little or no prior experience with CBT who received only a modest level of training in that technique” (p. 38).
In this they give short shrift to the large body of research attesting to the importance of the therapeutic relationship and to the skillfulness of the therapist, especially in the treatment of more difficult cases (e.g., Beutler et al., 2006; Gilbert & Leahy, 2007; Hofman & Weinberger, 2007; Norcross, 2002; Wampold, 2008).
One must wonder if, in their own lives, they really would as readily entrust their troubled adolescent or their own struggles with relationship problems or with feelings of meaningfulness or satisfaction in life to a bachelor’s-level therapist with a week’s experience.
The author note provides the following contact info: Paul L. Wachtel, PhD, Department of Psychology, City College of New York, 138th Street and Convent Avenue, New York, NY 10031. E-mail: <email@example.com>.