One of the most stringent models is the PESI model: Principles of Empirically Supported Interventions (Wampold et al., 2002). The American Psychological Association (APA) also has its own strict standards:
With respect to evaluating research on specific interventions, current APA policy identifies two widely accepted dimensions. As stated in the Criteria for Evaluating Treatment Guidelines (American Psychological Association, 2002):

The first dimension is treatment efficacy, the systematic and scientific evaluation of whether a treatment works. The second dimension is clinical utility, the applicability, feasibility, and usefulness of the intervention in the local or specific setting where it is to be offered. This dimension also includes determination of the generalizability of an intervention whose efficacy has been established. (p. 1053)

Types of research evidence with regard to intervention research, in ascending order as to their contribution to conclusions about efficacy, include “clinical opinion, observation, and consensus among recognized experts representing the range of use in the field” (Criterion 2.1); “systematized clinical observation” (Criterion 2.2); and “sophisticated empirical methodologies, including quasi experiments and randomized controlled experiments or their logical equivalents” (Criterion 2.3; American Psychological Association, 2002, p. 1054). Among sophisticated empirical methodologies, “randomized controlled experiments represent a more stringent way to evaluate treatment efficacy because they are the most effective way to rule out threats to internal validity in a single experiment” (American Psychological Association, 2002, p. 1054).

Evidence on clinical utility is also crucial. Per established APA policy (American Psychological Association, 2002), at a minimum this includes attention to generality of effects across varying and diverse patients, therapists, settings, and the interaction of these factors; the robustness of treatments across various modes of delivery; the feasibility with which treatments can be delivered to patients in real-world settings; and the costs associated with treatments.
Evidence-based practice requires that psychologists recognize the strengths and limitations of evidence obtained from different types of research. Research has shown that the treatment method (Nathan & Gorman, 2002), the individual psychologist (Wampold, 2001), the treatment relationship (Norcross, 2002), and the patient (Bohart & Tallman, 1999) are all vital contributors to the success of psychological practice. Comprehensive evidence-based practice will consider all of these determinants and their optimal combinations. (APA, 2006)
With that background, a new article in Frontiers in Psychology (Castelnuovo, 2010) questions the validity of these requirements. Part of the problem, however, is that only short-term models are now being tested for efficacy (a result of the limited number of sessions), and only manualized treatments are being tested, because the results have to be replicable by any therapist using the treatment.
The result is that long-term models, more client-based models, and more intuitive (in-the-moment) approaches are not only going untested but are also being rejected under the evidence-based models.
However, many therapists acknowledge the empirically supported treatments but use whatever they have found to work, whether or not it has been validated.
If you want to read the whole article, here is the Provisional PDF.

The field of research and practice in psychotherapy has been deeply influenced by two different approaches: the empirically supported treatments (ESTs) movement, linked with the evidence-based medicine (EBM) perspective, and the “Common Factors” approach, typically connected with the “Dodo Bird Verdict”. Regarding the first perspective, a list of ESTs has been established in the mental health field since 1998, and criteria for “well-established” and “probably efficacious” treatments have arisen. The development of these paradigms was motivated by the emergence of a “managerial” approach and related systems of remuneration for mental health providers and insurance companies. In this article, ESTs are presented along with some possible criticisms. Finally, complementary approaches that could add different kinds of evidence to psychotherapy research, in comparison with the traditional EBM approach, are presented.
Keywords: evidence based medicine, empirically supported treatments, psychotherapy research, common factors, efficacy, evidence based in clinical psychology, EBM, EST

Citation: Castelnuovo G (2010) Empirically supported treatments in psychotherapy: towards an evidence-based or evidence-biased psychology in clinical settings? Front. Psychology 1:27. doi:10.3389/fpsyg.2010.00027
References:
APA. (2006). Evidence-based practice in psychology. American Psychologist, 61(4), 271-285. doi:10.1037/0003-066X.61.4.271
Castelnuovo, G. (2010). Empirically supported treatments in psychotherapy: Towards an evidence-based or evidence-biased psychology in clinical settings? Frontiers in Psychology, 1, 27. doi:10.3389/fpsyg.2010.00027
Wampold, B. E., Lichtenberg, J. W., & Waehler, C. A. (2002). Principles of empirically supported interventions in counseling psychology. The Counseling Psychologist, 30, 197-217.