Entry for 18 November 2007:
The past week has been one of my busiest. Beth and I had to present the roll-out of the 2007 Person-Centred/Experiential therapy meta-analysis on Thursday at Scottish SPR. We were analysing studies up until 9pm on Wednesday, with Beth sending me analyses after that as she finished them, while I gradually worked my way through the PowerPoint slides. Although the new data set was much smaller than the previous one (29 vs. 112 studies), the total number of clients was large, and most of the results were remarkably consistent: pre-post effects pretty much the same; controlled effects a bit larger, if anything; comparative effects still hovering around zero.
Somewhere around midnight, however, we discovered that the researcher allegiance effect had disappeared. (This is the correlation between researchers’ pro- vs. con- theoretical allegiance and effect size.) This had been a regular feature of the last three iterations of the analysis, even getting larger over time; and of course it’s a general finding in the larger psychotherapy outcome literature (e.g., Luborsky et al., 1999). Now, instead of a correlation coefficient of -.59 (p < .05), it was a measly -.26, which is not even close to statistical significance with n = 29. Without a significant allegiance effect, there is no justification for controlling for it. I was concerned but also amused. Researcher allegiance is an important part of our overall analysis of why CBT sometimes appears to be superior to person-centred therapy; but I always get a kick out of it when results surprise me. It reinforces my belief in research and in science in general. What’s the point of doing science if you don’t get surprised some of the time?
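For anyone who wants to check the arithmetic: the usual significance test for a Pearson correlation converts r into a t statistic with n - 2 degrees of freedom. Here is a minimal Python sketch of that textbook test (not the actual meta-analytic weighting we use), showing why -.26 with 29 studies is nowhere near significant:

```python
import math
from scipy import stats

def correlation_p_value(r, n):
    """Two-tailed p-value for a Pearson correlation r based on n observations,
    using the standard t-test with n - 2 degrees of freedom."""
    t = r * math.sqrt((n - 2) / (1 - r ** 2))
    return 2 * stats.t.sf(abs(t), df=n - 2)

# Current data set: r = -.26 across 29 studies
print(round(correlation_p_value(-0.26, 29), 3))   # ~0.17, nowhere near .05

# For comparison, the old r = -.59 would be significant even with only 29 studies
print(round(correlation_p_value(-0.59, 29), 4))   # well under .001
```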
Mick was not amused when I told him the next day, insisting that we keep trying until we found an allegiance effect. Apparently, he has a strong researcher allegiance to researcher allegiance effects! But the fact is that Beth and I don’t trust the finding either, and will feel much better once we have included the other 20+ studies still out there waiting for us to get to them. Statistical power isn’t really enough here and we haven’t yet had a look round for outliers. Stay tuned for further developments…
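On the power point (the statistical kind, not the slides): a rough calculation using the Fisher z approximation, again just a sketch rather than anything we formally ran, suggests that with only 29 studies a true allegiance correlation of about -.26 would be detected barely a quarter of the time:

```python
import math
from scipy import stats

def correlation_power(r, n, alpha=0.05):
    """Approximate power to detect a population correlation r with n observations,
    via the Fisher z transformation (two-tailed test)."""
    z_r = math.atanh(abs(r))           # Fisher z of the hypothesised correlation
    se = 1 / math.sqrt(n - 3)          # standard error of Fisher z
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.sf(z_crit - z_r / se)

print(round(correlation_power(-0.26, 29), 2))   # roughly 0.27 with 29 studies
print(round(correlation_power(-0.26, 50), 2))   # roughly 0.45 with ~50 studies
```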
Reference: Luborsky, L., Diguer, L., Seligman, D.A., Rosenthal, R., Krause, E.D., Johnson, S., Halperin, G., Bishop, M., Berman, J.S., & Schweizer, E. (1999). The researcher’s own therapy allegiances: A “wild card” in comparisons of treatment efficacy. Clinical Psychology: Science and Practice, 6, 95-106.