Claims about DID Therapy: The Need for Clear, Transparent Descriptions of Treatments
Continuing on from my previous posting, I would like to elaborate on one of the most important points in my criticism of the meta-analysis on DID therapy referred to by DID therapy proponent and researcher Bethany Brand, who appeared as a guest on a recent NPR program and made some big claims about therapy for DID. The lack of a clear, transparent description of what was provided to clients in these studies in the name of DID therapy presents a major problem. After I heard her refer to this paper, I did a search and obtained a copy of the actual article to see how, exactly, the meta-analysis was conducted. It is always a good idea to check primary source documents such as this rather than simply accept at face value what is claimed in the media. In the article, Brand admits that “In general, descriptions of the treatments provided in these outcome studies are sparse” and goes on to note that “Treatment modalities included Cognitive Behavioral Therapy, art therapy, hypnosis, psychoeducation, and experiential therapy” (p. 647). These are each huge categories that could contain any number of different treatments. It is common for people to claim they are doing CBT but not be doing any of the CBT treatments that have been shown through studies to be effective, instead doing some treatment of their own that they classify as CBT. “Experiential” therapy could mean many different things, and according to Scott Lilienfeld’s published analysis of potentially harmful therapies, some experiential therapies can even do more harm than good, while other forms might actually be helpful and at least worthy of further study.
What was done in these studies of DID therapy? We simply do not know. Such vague descriptions give us no way to find out, and if what was done was in any way similar to the past “experiential” DID therapies that were the target of lawsuits during the 1990s, there is cause for concern. It is important to find out before signing up for such therapy.
The bottom line is that we really have no idea what treatments were provided in the studies included in this meta-analysis, other than that they addressed patients with DID. This is of no use to a clinician because, even if the results were positive, it provides no guidance as to what to do with clients. My biggest fear is that therapists who do DID therapy will wave this sort of article in front of clients and claim that what they do is “empirically supported,” when there is no way to know what was even done in the studies reviewed, not to mention the fact that no conclusions about efficacy can be drawn from the uncontrolled studies presented in the article.
Current reporting guidelines call for a clear, transparent description of the treatment(s) being delivered. This is important because 1) without such a description, it is not possible to reproduce (replicate) the studies and 2) it is impossible to tell what, exactly, was being studied and shown to have a positive effect (or whatever the results were). The vague description provided by Dr. Brand and her colleagues leaves the door wide open for any DID therapist to claim that DID therapy has been shown in a published meta-analysis to be effective. The design of the uncontrolled studies would not support such a claim in any case, but even if it did, the claim still could not be made because we don’t know what the therapists did. It is highly doubtful that all people who treat DID do the same thing; in fact, according to the authors, practice varies greatly from therapist to therapist. For this reason, it is even more important that in each study 1) what was done be clearly described in detail and 2) systematic checks be performed to ensure that the protocol was carried out as described. This is called a fidelity check, and there are systematic ways in which fidelity checks need to be done.
Ideally, the sessions in the studies should be videotaped so that readers (including other professionals and therapy consumers) know exactly what was done. That way, when a person enters therapy with a provider who gives a diagnosis of DID and offers DID therapy, full informed consent can be provided to the client: the client would be told exactly what specific treatments have been studied, and if the provider does something other than what was studied, the client would immediately know that this is a departure from what was in the study.
Currently, however, we are not in this situation, because nothing unique to DID therapy has even come close to meeting the criteria for an empirically supported treatment. If researchers such as Dr. Brand are interested in studying it, they need to begin with very clear, detailed, and transparent descriptions of what was done, have the sessions videotaped, and have them formally evaluated for fidelity.
That would also prevent therapists from claiming to do something that sounds legitimate, such as CBT or mindfulness interventions, and then, behind the closed doors of the therapy session, doing something completely different and more controversial. I bring this up because former patients who feel they have been harmed have reported exactly this happening: a therapist advertises doing one thing and then does another in the sessions.
Published studies with clear descriptions and fidelity checks, accompanied by full informed consent provided to clients by practitioners, can go a long way toward preventing this. The take-away lesson for therapy consumers is to demand full written and verbal informed consent from a provider, including the evidence for what specifically is being offered, the other options that exist and their evidence, and the risks and benefits of the therapy being offered. Until we have more stringent guidelines requiring this, enforced by state boards, we will continue to have people harmed in the name of “therapy.”
Incidentally, another puzzling contradiction in this article is that at the end, the authors state that randomized controlled trials (RCTs) would be impossible or very difficult to do due to the length of the treatment. However, the treatments included in the meta-analysis were reported to range from only 4 to 20 outpatient sessions, or 1 to 3 weeks for inpatient treatments. It would not be difficult to do RCTs on treatments of that length. As Bruce Thyer and I pointed out in our recently published special issue of the Clinical Social Work Journal, which I referred to in an earlier posting this month, researchers are finding ways to do controlled studies even on long-term psychodynamic psychotherapy, so the length of therapy should, in the future, be less of an excuse not to do them.
Once again, the bottom line is that, in my opinion, DID therapy proponents and researchers have no business reporting to the media that there is solid science behind what they offer. Unfortunately, most people are not able to access the primary sources as I did. That is why I do what I do: to educate therapy consumers on points of view that they may not hear from proponents of such therapies.
Even worse, Dr. Brand claimed in the NPR interview that these publications refute the notion that DID therapy is harmful, when they can do no such thing. Given the high dropout rates, with the dropouts not being evaluated in any way, it is impossible to tell what impact the therapy had on them. Harmful therapy doesn’t necessarily mean that the therapy harmed everyone, or even the majority; it means that there is a subset of people who end up worse off after the therapy than they were before. Think about it this way: most people who drive drunk do not get killed in traffic accidents. However, a subset of people do, and that means we consider driving drunk a harmful practice, regardless of the fact that most people who drive drunk do not die or kill others as a result. If a practice has the potential to do that kind of harm, it needs to be avoided, and we cannot know at this point whether people are better off, worse off, or unchanged as a result of the therapy being provided for DID.
All of this is completely aside from the fact that the DID diagnosis itself has been called into question, is highly controversial, and is, at best, overdiagnosed.