
What if there are no Research-Supported Treatments?

September 30, 2010

I want to go back to a question raised by Faith. What should a therapist do if there are no empirically (research) supported treatments for a particular type of problem?

First of all, in the case of attachment disorders, I would have to question this premise, since many children appear to be getting misdiagnosed on the basis of symptoms that are not part of the DSM definition of Reactive Attachment Disorder. The chief complaint most parents have about these so-called “RAD kids” is behavior problems, and there are many well-supported treatments for behavior problems, even serious ones, as I demonstrated in a previous posting, which links to a podcast that provides a detailed description.

That being said, I will turn to the question of what one should do if there really aren’t any research-supported treatments. There are several principles the mental health professional needs to keep in mind.

First and foremost is the principle primum non nocere: first, do no harm. If there is no evidence, then at least use your common sense and avoid treatments that appear to risk harm or that contain elements, such as prone restraints, that have actually been shown through research to be harmful. Sometimes the possibility of harm is not so obvious, but at least rule out treatments that do, on the face of things, appear to be harmful.

Next is the very important principle of informed consent. Informed consent means:

  • Accurately informing the client of what evidence supports the treatment being proposed. If there are no studies published in peer reviewed journals, the client needs to be informed of this and what is being done has to be clearly labeled as experimental.
  • Not making any kind of unsupported claims about success rates, percentage improvement, or anything like that. If the claims are based only on clinical experience and anecdotes, they are not valid, because without systematic study we have no way of knowing how many failures there actually were. Human beings have a tendency to engage in confirmation bias, and that includes human beings who happen to be therapists. As Paul Meehl pointed out, it is highly arrogant for any therapist to think that he or she is above confirmation bias. What this means is that there is a tendency to focus on the successes of a favored treatment and to ignore or explain away failures. To truly know a treatment’s success rate, well-designed, randomized controlled studies are needed. These days, to be recognized, such studies need to be pre-registered at clinicaltrials.gov to make sure all data are present and accounted for. Well-designed studies also account for people who drop out of therapy, who otherwise wouldn’t be counted. If a therapist is making a claim about a success rate and has no peer-reviewed published data from randomized clinical trials to back it up, it’s time to head for the hills, in my opinion.
  • Informing the client of other treatment options that are available and their evidence. This would include other opinions about diagnosis.

When trying a treatment that does not have such evidence with a client, it is very important to carefully monitor the client’s progress. This is important even with research-supported treatments, but it is especially important if there is little or no research support. The client’s progress should be measured using well-validated assessment tools, not something like the RAD-Q, which, according to the published literature, is not well-validated.

If, while carefully monitoring the client, you find that the client is making improvements, then continue. If, however, there is no change, or the client is deteriorating (getting worse) while using an unvalidated treatment, then the therapist should stop immediately. Again, this is common sense. If what you are doing is not working, try something else. Do not continue doing something that is not helping the client. It doesn’t take a genius to realize this, but as the saying often attributed to Albert Einstein goes:

“Insanity: doing the same thing over and over again and expecting different results.”

It is especially important not to fall for the claim that someone needs to get worse before they get better. If a treatment hasn’t undergone proper testing, this is a very risky proposition.

For example, on the BBC program Taming the Problem Child, which featured Ronald Federici’s intervention, the following was reported. Here is a fair-use quote from the transcript:

JULIE KEW: Since she’s been back at school we still get the reports back that she’s hit somebody, she’s pushed somebody and that she’s bitten a child. I thought that maybe that she’d learnt, but obviously she hasn’t.

RON FEDERICI: Failures happen all the time. People go back to level 1 all the time if they violate the major rules which are resurgence of aggression, violence, lying, cheating and manipulation, because that gives the parents the message that the child needs even more time with them to help break down further barriers which have been left untouched because that means the child still has some deeper layers of problems.

NARRATOR: After 3 months Sergei’s parents report that he seems to have deteriorated and that his behaviour is almost as bad as before. They are continuing with the treatment. However, what concerns critics is that there have been no control trials to measure independently the effect of the treatment.

PETER FONAGY: Because it’s such an unusual intervention I would really want to know in a properly conducted randomised control trial that the treatment is (a) safe and (b) effective in the long run.

I concur with Dr. Fonagy. In my opinion, it is especially concerning that a treatment would be recommended to be continued even when good progress is not being made.

To sum it up, if there are no empirically supported treatments for a particular type of problem, first of all, get a second or even third opinion about the diagnosis from therapists who are not connected with your original therapist. Next, select treatments that appear, on the face of things, to do no harm. Choose a treatment that appears to have worked well with others and then carefully monitor that particular client using well-validated assessment tools. If the client is not making progress or appears to be deteriorating, then stop doing what is not working and try something else. Do not just do more of what has not been working and expect different results.

An example of this is the serious deterioration that occurred during therapies administered in the 1980s and 90s (and still given by a minority of therapists even today) for recovery of memories of childhood sexual abuse and supposed multiple personalities, now called Dissociative Identity Disorder or DID. Many people who came in for treatment seriously deteriorated after undergoing years and years of such therapies, to the point where they had to be hospitalized, and yet they were told they had to get worse before they got better; as a result, they were seriously damaged by the therapy and ended up much worse because of it. In some of these cases, false memories of horrendous abuse were brought up, and some people were told that they had literally hundreds of personalities. Therapy stopped only when their insurance ran out. Then, after some time away from the iatrogenic therapy, some of these clients realized how damaging it had been, sued their therapists, and won multi-million-dollar judgments or settlements, such as Nadean Cool, who won a $2.4 million settlement against her therapist.

How much have we learned from these past mistakes? Some therapists appear not to have learned at all and have continued doing interventions that are untested and may cause harm. Because of this, it is the clients, the therapy consumers, who have to take their power back and actively question anyone they are thinking of hiring as a therapist. Most people ask more questions when buying a car than they do when hiring a therapist. It is high time this changes.

2 Comments
  1. Dr. Cathleen Mann

    The underlying problems that cause this kind of thinking in mental health, psychology, psychiatry, social work, whatever, are complicated in origin. A lot of it has to do with training in graduate school, but this is not the whole problem. In counseling programs, there is not enough rigor to look at evidence-based practices in psychology. Counseling and psychology are not the same thing. So what happens in most graduate programs training counselors is that they are taught all kinds of clinical lore, and it is presented as truth. I can think of one glaring example. I was taught in my counseling (master’s) program that all fat women had been sexually abused. Always. Think about that statement for a minute. How pretentious. I also think another glaring problem is that many people graduate from school and think they never have to learn another thing. We have counselors, even licensed ones, and psychologists and psychiatrists — take your pick — that haven’t read anything in 20 years. They are still relying on old, outdated theories…or worse yet, they’ve made up theories of their own (look for the code buzz word here: eclectic), and they just do whatever they want because they think they’ve arrived. The mental health field is very broken.

    • How well I remember my suspicion when I saw “eclectic” mentioned as a fourth therapy in the World Book Encyclopaedia: after psychodynamic, behavioural and humanistic.

      I was 15.

      I do accept eclecticism in philosophy, but not in psychiatry or psychology.

      I do remember hearing the “fat women are abused” claim percolating around.

      Both counselling and psychology try to deal with the normal mind having problems in the world.

      Counselling can be more practical and it is usually short.

      The Kew-Federici-Fonagy dialogue shows an example of how an unhealthy attitude towards slips and relapses can impede progress.
