Entry for 1 April 2008:
Reliable change is defined as change beyond the error of measurement on the instrument being used to assess outcome. It is calculated by a formula popularized by Neil Jacobson in the 1980s, based on the outcome measure’s reliability and standard deviation. This yields a Reliable Change Index (RCI), which tells us how much client change on a measure is enough to conclude that the client has really changed.
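For concreteness, here is a minimal sketch of the calculation, assuming the standard Jacobson-Truax difference-score form of the formula (z × SD × √2 × √(1 − r)); the function name and the SD and reliability values in the example are purely illustrative:

```python
from math import sqrt
from statistics import NormalDist

def rci_threshold(sd, reliability, p=0.05):
    """Smallest pre-post difference that counts as reliable change:
    z * SD * sqrt(2) * sqrt(1 - r) (Jacobson-Truax difference-score form)."""
    z = NormalDist().inv_cdf(1 - p / 2)             # two-tailed critical z value
    se_diff = sd * sqrt(2) * sqrt(1 - reliability)  # standard error of a difference score
    return z * se_diff

# Illustrative values only: SD = 10, test-retest reliability = .80
print(round(rci_threshold(10, 0.80), 1))  # about 12.4 raw-score points
```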
The trickiest thing about calculating RCI is figuring out which estimate of reliability to use.
1. First, the reliability estimate should be relevant to the question being addressed, i.e., change across time; that means test-retest reliability. Internal reliabilities (among items, commonly calculated using Cronbach’s alpha) are sometimes used instead, but this is cheating, especially given that internal reliabilities are usually larger.
2. This leaves open the question of what kind of test-retest reliability estimate is best to use. Pre-post correlations comparing clients’ pre-therapy scores to their post-therapy scores aren’t ideal because the whole point of therapy is to help the client change their normal trajectory. If therapy were perfect in addressing client problems, highly distressed clients would change more and less distressed clients would change less (simply because they have less room to change); this would yield a zero pre-post correlation. Therefore, change over a period outside of therapy is generally preferred, i.e., the correlation between client intake and client pre-therapy scores (there is usually a delay between intake and starting therapy).
3. Of course, all of this leaves aside the very real possibility that clients differ in the consistency of their scores over time, with some clients being more consistent than others. It is perfectly obvious from tracking clients’ weekly change that they vary widely in this: Some clients show a consistent downward slope, while others’ scores bounce up and down wildly. Using weekly change measures, such as the Personal Questionnaire, provides an interesting alternative strategy for estimating temporal consistency: time series analysis or ARIMA (i.e., Autoregressive Integrated Moving Average; aren't you sorry you asked...) modelling, a method developed by economists to model changes in unemployment and so on. Here, we correlate each week’s score with that of the next week: time t with time t + 1 (i.e., session 1 with session 2, session 2 with session 3, and so on). This yields what is called an autocorrelation (a correlation of a variable with itself, displaced in time).
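The week-to-week (lag 1) autocorrelation just described can be computed directly as a Pearson correlation between the series and itself shifted by one session. A minimal sketch (the function name and example scores are my own):

```python
from math import sqrt

def lag1_autocorrelation(scores):
    """Pearson correlation of each week's score with the next week's
    (time t vs. time t + 1), as described above."""
    x, y = scores[:-1], scores[1:]          # pair session 1 with 2, 2 with 3, ...
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A client with a perfectly steady downward slope has autocorrelation 1.0:
print(lag1_autocorrelation([6, 5, 4, 3, 2]))
```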
In time series analysis, the non-random components of the time series are measured and modelled in order to carry out proper significance tests. These components include general change over time (referred to as “secular trend”), autocorrelation (after removing secular trend, if present, by calculating differences between successive scores), and autocorrelated error (carry-over of variability from one time to the next). This is one of the most technical, obscure research methods I know (and I’ve learned some doozies in my time…). However, for estimating the consistency of weekly scores, I'm thinking a simple autocorrelation should do the trick.
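Removing secular trend by differencing successive scores, as mentioned above, is trivially simple in code (a sketch; the function name is my own):

```python
def first_differences(scores):
    """Remove a secular trend by taking the difference between each
    score and the one before it (first-order differencing)."""
    return [later - earlier for earlier, later in zip(scores, scores[1:])]

# A steadily improving client: the trend disappears into a constant difference
print(first_differences([7, 6, 5, 4]))  # [-1, -1, -1]
```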
As long as a client has at least 10 sessions or so (20+ is ideal), using autocorrelations would allow us to estimate RCI values on an individualized basis. For example, client PE-111, a client with a bridge phobia whom I saw in the CSEP-1 (Centre for the Study of Experiential Psychotherapies Study 1) project, had a session-to-session (lag 1) autocorrelation of .37 and a standard deviation of .34, both values fairly low, but consistent with the high, fairly stable scores he showed over most of his therapy. These values yield RCI values of .75 points (on a 7-point scale) at a probability of p < .05 and of .5 points at a probability of p < .2. This means that this client would have to show at least half a point of change for us to be reasonably confident that he’d shown change over time (including from week to week), or three-quarters of a point for us to be almost certain that he had shown change. By contrast, for the client I am currently seeing in our social anxiety research protocol, with a similar test-retest reliability and a standard deviation twice as large, the corresponding values are correspondingly larger: 1.50 and .99.
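The individualized thresholds for client PE-111 can be checked with the standard Jacobson-Truax difference-score formula (an assumption on my part about exactly how these figures were computed, though it reproduces them):

```python
from math import sqrt
from statistics import NormalDist

def rci_threshold(sd, reliability, p):
    """Reliable change threshold: z * SD * sqrt(2) * sqrt(1 - r)."""
    z = NormalDist().inv_cdf(1 - p / 2)  # two-tailed critical z value
    return z * sd * sqrt(2) * sqrt(1 - reliability)

# Client PE-111: lag-1 autocorrelation .37, standard deviation .34
print(round(rci_threshold(0.34, 0.37, 0.05), 2))  # 0.75 points on the 7-point scale
print(round(rci_threshold(0.34, 0.37, 0.20), 2))  # 0.49, i.e., about half a point
```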
This is another example of how our conventional ways of calculating statistics are based on general assumptions that make relatively little sense when applied to individual clients, and point to the importance of individualized research methods.