In 1984-85, as I have written previously, we spent a stimulating year in Sheffield, England, working with David Shapiro and his team. Two members of this team were Gillian Hardy and Michael Barkham. During that year, we worked on several projects, the most important of which was a Comprehensive Process Analysis of insight events in cognitive-behavioural and psychodynamic-interpersonal therapies. Several years later, David and the team moved to the University of Leeds, and eventually dispersed, with Gill returning to Sheffield to teach on the new Clinical Psychology course, while Michael stayed on at Leeds and David finally took early retirement. Eventually both Michael and Gillian got themselves promoted to Professor at their respective universities.
Now, however, Michael has taken a Professorship at the University of Sheffield to head up the new Centre for Psychological Services Research, creating a collaboration with Gill, with Glenys Parry, another 1980s Sheffield alum, and with John Brazier, a health economist, and thus between the School of Health and Related Research (ScHARR) and Psychology. The idea of the new centre is a timely and important one: to integrate health services research (whose major source disciplines are epidemiology, economics, and public health) with psychotherapy research (source discipline: psychology).
Yesterday (Friday) the four of them put on a one-day conference to mark the opening of the new centre. There were various speeches by university and NHS dignitaries, and the four and others associated with the centre presented their vision for it, including a moving music/video presentation by a service user (“mental health consumer” in the US) named Julie Coleman.
Bruce Wampold then gave a comprehensive keynote address reviewing what we know from psychotherapy research about what affects outcome. Mike Lambert has offered a widely cited breakdown of the variance accounted for in therapy outcome (e.g., 30% due to the therapeutic relationship; 15% due to technique/model), but I found Bruce’s analysis more empirically grounded and useful. In descending order of size:
1. Client pre-therapy status on whatever the outcome measure is: 40–50% of the variance on that measure. This is by far the best predictor. The variability in this and the other estimates depends largely on the type of measure being used. The remaining 19–36% of the variance? Bruce didn’t say, but there may be additional client pre-therapy characteristics, such as problem chronicity or duration (as hypothesized by David Clark in his attack on the new centre team’s most recent CORE analysis, Stiles et al., 2007, in the latest issue of Psychological Medicine). Most likely, the rest is “error”, or, as I have long suspected, mysterious, chaotic 10-way interaction effects, essentially indistinguishable from error.
2. Getting therapy (vs. not getting therapy): about 13% of the variance
3. Therapeutic alliance: 5–9% of the variance (about half of this attributable to the therapist).
4. Therapist: 5–8% of the variance. (This partly overlaps with alliance.)
5. Type of therapy: at most, 1% of the variance.
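The 19–36% unexplained remainder mentioned under point 1 follows from simple bookkeeping over this list. A minimal sketch (bearing in mind that the alliance and therapist components partly overlap, so the sums are approximate):

```python
# Rough bookkeeping for the unexplained remainder in Bruce's breakdown.
# (Alliance and therapist effects partly overlap, so treat these as approximate.)
components_low = {"client pre-therapy status": 40, "getting therapy": 13,
                  "alliance": 5, "therapist": 5, "type of therapy": 1}
components_high = {"client pre-therapy status": 50, "getting therapy": 13,
                   "alliance": 9, "therapist": 8, "type of therapy": 1}

print(100 - sum(components_high.values()))  # → 19 (high-end estimates)
print(100 - sum(components_low.values()))   # → 36 (low-end estimates)
```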
Bruce’s own research sometimes gets quite technical, for example using multilevel hierarchical modelling. This work makes me somewhat nervous, partly because I find it difficult to follow, but also because I have observed that the more complex the statistical methods used, the more assumptions are made and the harder it is to explain the results to practitioners. I prefer as a matter of principle to use simpler statistics wherever possible; however, sometimes it is not possible…
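For readers unfamiliar with what such multilevel models estimate: the core idea is partitioning outcome variance between therapists and clients nested within them. A minimal sketch using simulated data and the classic one-way ANOVA intraclass-correlation estimator, rather than full multilevel modelling (all numbers here are invented for illustration):

```python
import random

random.seed(1)

# Simulate outcome scores for clients nested within therapists.
# The true therapist effect is set so it explains roughly 7% of variance,
# in the 5-8% range cited in the talk (0.27**2 / (0.27**2 + 1) ≈ 6.8%).
n_therapists, n_clients = 50, 30
therapist_sd, residual_sd = 0.27, 1.0

data = []
for _ in range(n_therapists):
    t_effect = random.gauss(0, therapist_sd)
    data.append([random.gauss(t_effect, residual_sd) for _ in range(n_clients)])

# One-way ANOVA variance-components estimator (balanced design):
grand = sum(sum(g) for g in data) / (n_therapists * n_clients)
means = [sum(g) / n_clients for g in data]
msb = n_clients * sum((m - grand) ** 2 for m in means) / (n_therapists - 1)
msw = sum((x - m) ** 2 for g, m in zip(data, means)
          for x in g) / (n_therapists * (n_clients - 1))

# Intraclass correlation: the share of outcome variance due to therapist.
icc = (msb - msw) / (msb + (n_clients - 1) * msw)
print(f"estimated therapist share of variance: {icc:.1%}")
```

Proper multilevel models do this partitioning while also handling unbalanced caseloads and predictors at each level, which is where the extra assumptions (and the difficulty of explaining results to practitioners) come in.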
After lunch it was the turn of various outside collaborators of the centre, mostly “macro” folks, who talked about health economics and a large-scale community-psychology-type intervention (Dave Richards on an evaluation of para-professional helpers delivering a telephone-based guided self-help program for depression). After three of these presentations, I felt a bit in foreign territory as I began my presentation, especially after Dave Richards’ remarks about not wanting to get lost in “minutiae”. Nevertheless, I launched into an updated version of my SPR-Wisconsin paper on Change Process Research genres (see Blog entry for 23 June 2007), building on Gill’s presentation from the morning. In the end, the presentation seemed to go over pretty well, although I was uncharacteristically nervous, and it was near the end of a long day. Several people told me afterwards over tea that they appreciated my comments on the poor quality of much of the qualitative research being produced today and the need for greater variety in qualitative research data collection and analysis methods.
Finally, Tony Roth, whom I had met at the BACP research conference last May, made brief comments and chaired an open discussion and question-and-answer session. The panellists were asked to identify themes or general impressions of the day. Here are my main thoughts:
1. I learned a lot about contextual or “macro” stuff that I didn’t know about, for example, the health economics concept of Quality Adjusted Life Years (QALYs) for estimating the benefits of therapy. (You use a quality-of-life weight, which varies from 1 = perfect health to 0 = so miserable that the person would just as soon be dead; this is multiplied by the number of years over which a benefit of therapy might be experienced.)
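The QALY arithmetic just described can be sketched in a few lines; the specific weights and duration below are hypothetical numbers of my own, purely for illustration:

```python
def qalys_gained(weight_before: float, weight_after: float, years: float) -> float:
    """QALYs gained = (quality weight after - quality weight before) x years the benefit lasts.

    Weights run from 0 (as good as dead) to 1 (perfect health).
    """
    return (weight_after - weight_before) * years

# e.g. if therapy lifts someone's quality-of-life weight from 0.6 to 0.8
# and the benefit is assumed to last 5 years:
print(round(qalys_gained(0.6, 0.8, 5), 2))  # → 1.0 QALY gained
```

Health economists then compare the cost of the intervention per QALY gained across very different treatments, which is what makes the measure useful for allocation decisions.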
2. There was a general consensus in favour of theoretical, disciplinary and methodological pluralism. Therapeutic mono-culture was universally decried; multiple methods were the order of the day. This was unsurprising, given the nature of the occasion and the centre, but it was still encouraging and refreshing, and led to an absence of academic one-upmanship and a generally collegial conversation.
The last question from the audience was, “How do you think the £170 million allocated last week under the Layard initiative to promoting access to mental health treatment should be spent?” Everyone knows that it is ear-marked for providing CBT for people whose anxiety/depression are keeping them on the disability and unemployment rolls. The health economists among us had explained how poorly thought out this initiative is (apparently the economic modelling was done by Lord Layard on the back of an envelope). I have been waiting for someone to ask me this question since I first heard about the initiative 18 months ago, so I piped up, “What I think the money should really be used for is doing the hard research into the factors that keep people on disability and unemployed, that is, the culture of poverty and deprivation that is behind their employment problems, and what people need to move out of that. Otherwise, as a friend of mine [John McLeod] said, we might want to start planning research proposals on these problems for 5 to 10 years from now, when the initiative fails!”