1. After several years of work and a lot of revisions, our paper on next-generation analyses of a common measure of psychological distress has finally been published in Psychological Assessment, the most prestigious journal in psychological measurement. We undertook this project in order to show researchers in clinical and counselling psychology the analytic power and usefulness of these new methods of analysis, which are common in educational testing but have not yet penetrated clinical/counselling psychology. Clinical psychology in particular tends to be fairly conservative methodologically, and so there has been some resistance to these methods, including in my former department.
2. Personal experience. This was one of the most challenging publication processes I have been through in my career, and probably the most technical, on multiple levels: analysis procedures, organizing and presenting complex material in a clear, simple manner, dealing with large numbers of obscure technical issues raised by the reviewers, and finally, at the last minute, working around excessive copyright restrictions. It would have been impossible without the group of co-authors, especially my Rasch analysis colleagues from the University of Toledo School of Education: Christine Fox, Gregory Stone, and Svetlana Beltyukova. They are a great group of people: smart, fun, unpretentious, supportive, and passionate about psychological measurement. Working with them provided a safe haven during a difficult time for me professionally, and I am deeply grateful to them. Two of our graduate students, Jen Anderson and Xi Zhang, also provided invaluable help searching and analyzing the literature and with parts of the write-up.
So here it is, in print, and it is a piece that I am proud of, one of my key publications that I hope will have a real impact on the field. The following paragraphs are a bit of a taster for the article:
3. Why I think the article is important: Psychological measurement instruments, most commonly self-report questionnaires, are at the heart of contemporary quantitative research in clinical and counselling psychology, including measures of psychological distress, well-being, attitudes toward treatment, personality dispositions, therapeutic alliance, and clients’ reactions to therapy sessions. Traditional methods for analyzing such measurement instruments include exploratory and confirmatory factor analysis and reliability analysis (Cronbach’s alpha, Ebel’s intraclass correlation coefficient). These methods provide answers about the number of underlying dimensions that make up the psychological construct being measured, about how well the group of items in the instrument hangs together, and about items that don’t seem to be doing a good job of measuring the construct.
It turns out, however, that these traditional psychometric methods use only a small proportion of the information available and yield a fairly limited range of diagnostic feedback. Beginning in the 1960s, Danish mathematician Georg Rasch (1960, 1980) developed a new model for social science measurement, known today as the basic form of Item Response Theory (IRT), or Rasch analysis. The underlying theory of most Rasch models specifies that useful measurement consists of a unidimensional concept arranged in a consistent pattern (e.g., more than/less than) along an equal-interval continuum. If the data fit the Rasch model, there are various cool things that can be done with them, because they can then be interpreted in terms of abstract, equal-interval units (Bond & Fox, 2001). This makes them “ruler-like”.
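For the mathematically curious, here is the dichotomous form of the model (the article itself works with a polytomous extension suited to a multi-point rating scale, but the logic is the same). The log-odds of a person endorsing an item are simply the difference between the person’s trait level and the item’s difficulty, both expressed in the same equal-interval “logit” units:

\[ \ln\frac{P(X_{ni}=1)}{P(X_{ni}=0)} = \theta_n - \delta_i, \qquad P(X_{ni}=1) = \frac{e^{\theta_n - \delta_i}}{1 + e^{\theta_n - \delta_i}} \]

where \(\theta_n\) is person n’s level on the trait and \(\delta_i\) is item i’s difficulty. It is this additive structure on a common scale for persons and items that gives Rasch measures their “ruler-like” quality.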
4. Eight cool things you can do with Rasch Analysis:
1. Scale points. Determine the number and anchoring of rating scale points. (For a toy illustration of collapsing scale points, see the sketch after this list.)
2. Internal reliability/scale length. Improve scale internal consistency and efficiency by dropping unnecessary scale points and misfitting items. (Note: The traditional approach can do this too, but Rasch analysis offers a broader range of tools and more precise methods.)
3. Respondent validity assessment. Identify individual respondents whose inconsistent (“misfitting”) patterns of responding indicate that their data are invalid.
4. Separation/range. Evaluate the number of distinct groups or strata within your items (item separation) and your population (person separation) that the measure can discriminate.
5. Construct validity (order/fit). Use the Person-Item map (a ranking of items and people on the measured dimension) and fit statistics to evaluate the construct validity of the measure in relation to the expected hierarchical structure of the variable.
6. Measurement gaps. Identify measurement gaps (and redundancies) on the variable in need of additional items.
7. Sampling gaps. Identify sampling gaps (and redundancies) on the variable in need of further research.
8. Theory testing and development. Test and refine theories about the sequence, development, and ranking of constructs (by use of order analyses).
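To make the first two items concrete, here is a minimal, purely illustrative Python sketch of the mechanics of collapsing a 5-point (0-4) scale into a 3-point one. The data here are synthetic, not our SCL-90-R samples, and all the names are invented for the example:

    import numpy as np

    def cronbach_alpha(X):
        # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
        X = np.asarray(X, dtype=float)
        k = X.shape[1]
        item_var = X.var(axis=0, ddof=1).sum()
        total_var = X.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_var / total_var)

    rng = np.random.default_rng(0)
    trait = rng.normal(size=(200, 1))              # latent distress levels, 200 people
    noise = rng.normal(scale=1.0, size=(200, 10))  # item-level noise, 10 items
    raw = np.clip(np.round(trait + noise + 2), 0, 4).astype(int)  # 0-4 ratings

    # Collapse 0-4 into 0-2: keep 0, merge 1 with 2, merge 3 with 4
    collapse = np.array([0, 1, 1, 2, 2])
    three_point = collapse[raw]

    print(f"alpha, 5-point scale: {cronbach_alpha(raw):.3f}")
    print(f"alpha, 3-point scale: {cronbach_alpha(three_point):.3f}")

In the actual analysis, the decision to collapse categories was driven by Rasch rating scale diagnostics (category usage and threshold ordering), not by alpha alone; the snippet just shows the recoding mechanics plus a familiar reliability check.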
5. Article abstract (Warning: The following is the official summary of the article, but it’s pretty dense and technical. We spent a lot of time getting as much information as possible into the 125-word limit, like some sort of warped mega-haiku, only less poetic.)
Rasch analysis was used to illustrate the usefulness of item-level analyses for evaluating a common therapy outcome measure of general clinical distress, the Symptom Checklist-90-Revised (SCL-90-R; Derogatis, 1994). Using complementary therapy research samples, we found that the instrument’s 5-point rating scale exceeded clients’ ability to make reliable discriminations and could be improved by collapsing it into a 3-point version (combining scale points 1 with 2 and 3 with 4). This, plus removing three misfitting items, increased person separation from 4.90 to 5.07 and item separation from 7.76 to 8.52 (resulting in alphas of .96 and .99 respectively). Some SCL-90-R subscales had low internal-consistency reliabilities; SCL-90-R items can be used to define one factor of general clinical distress that is generally stable across both samples, with two small residual factors.
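A decoding note on the separation statistics, since they are less familiar than alpha: in the standard Rasch formulation, separation (G) and reliability (R) carry the same information, related by

\[ R = \frac{G^2}{1 + G^2}, \qquad G = \sqrt{\frac{R}{1 - R}} \]

So the person separation of 5.07 corresponds to a reliability of about 25.7/26.7 ≈ .96, and the item separation of 8.52 to about .99, which lines up with the alphas quoted in the abstract.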
6. References:
Article: Elliott, R., Fox, C. M., Beltyukova, S. A., Stone, G. E., Gunderson, J., & Zhang, X. (2006). Deconstructing therapy outcome measurement with Rasch analysis: The SCL-90-R. Psychological Assessment, 18, 359-372.
General: Bond, T. G., & Fox, C. M. (2001). Applying the Rasch model: Fundamental measurement in the human sciences. Mahwah, NJ: Lawrence Erlbaum Associates.