I recently reviewed G. J. Cizek’s book Validity: An Integrated Approach to Test Score Meaning and Use (published by Routledge, 2020) for the journal Applied Measurement in Education. Here’s a link to my review.
Here’s an overview, from the first paragraph in the review.
Can measurement inferences be meaningful but not useful? Are we better off evaluating test score interpretations separately from their applications? Does validity theory itself need to be revamped? These are the kinds of big philosophical questions Cizek tackles, though with limited philosophizing, in his book Validity: An Integrated Approach to Test Score Meaning and Use. The premise of the book, that validity does need revamping, won’t come as a surprise to readers familiar with his earlier writing on the topic. The main ideas are the same, as are some of his testing examples and metaphors. However, the book does give Cizek space to elaborate on his comprehensive framework for defensible testing, and the target audience of “rigorous scholars and practitioners… who have no wish to be philosophers of science” may appreciate the book’s focus on pragmatic recommendations over “metaphysical contemplations.”
And here’s my synopsis of the book by chapter.
After an intriguing preface (current validation efforts are described as anemic and lacking in alacrity), the book starts with an introduction to some foundational testing concepts (Chapter 1), and then reviews areas of consensus in validation (e.g., content, response process, convergent evidence; Chapter 2), before highlighting the essential point of disagreement (i.e., how we handle test uses and consequences; Chapter 3). Cizek’s main argument, reiterated throughout the book, is that considerations around score inference should nearly always be detached from considerations around test use, and that combining the two (common in the US since the 1990s) has been counterproductive. He presents a framework that separates a) validation of the intended meaning of scores via the usual sources of evidence, minus uses and consequences (Chapter 4), from b) justifying the intended uses of scores, following theory and methods from program evaluation (Chapter 5). The book ends with recommendations for determining how much evidence is enough for successful validation and justification (Chapter 6), and, finally, a summary with comments on future directions (Chapter 7).
Throughout the book, Cizek critiques the writings of Messick, a distinguished validity theorist, and he acknowledges in the book’s preface that doing so felt like tugging on Superman’s cape. I’m not sure where that puts me, someone who has only ever written about validity as it relates to other issues like item bias. I guess I’m either spitting into the wind or pulling the mask off the Old Lone Ranger.
Though I agree with Cizek on some key issues – including that validity theory is becoming impractically complex – my review of the book ended up being mostly critical. Maybe half of my 1800 or so words went to summarizing two limitations that I see in the book. First, it oversimplifies and sometimes misrepresents the alternative and more mainstream perspective that uses and consequences should be part of validity. Quotations and summaries of the opposing views could have been much tighter (I highlight a few in my review). Second, the book leaves us wanting more on the question of how to integrate information – if we evaluate testing in two stages, based on meaning in scores and justification of uses, how do we combine results to determine if a test is defensible? The two stages are discussed separately, but the crucial integration step isn’t clearly explained or demonstrated.
I do like how the book lays out program evaluation as a framework for evaluating (some would say validating) uses and consequences. Again, it’s unclear how we integrate conclusions from this step with our other validation efforts in establishing score meaning. But program evaluation is a nice fit to the general problem of justifying test use. It offers us established procedures and best practices for study design, data collection, and analyzing and interpreting results.
I also appreciate that Cizek is questioning the ever-creeping scope of validity. Uses and consequences can be relevant to validation, and shouldn’t be ignored, but they can also be so complex and open-ended as to make validation unmanageable. Social responsibility and social justice – which have received a lot of attention in the measurement literature in the past three years and so aren’t addressed in their latest form in the book – are a pertinent example. To what extent should antiracism be a component of test design? To what extent should adverse impact in test results invalidate testing? And who’s to say? I still have some reading to do (Applied Measurement in Education has a new special issue on social justice topics), but it seems like proponents would now argue, in the most extreme case, that any group difference justifies pausing or reconsidering testing. Proposals like this need more study and discussion (similar to what we had on social responsibility in admission testing) before they’re applied generally or added to our professional standards.