Article on Intersectional DIF in Applied Measurement in Education

Brian French, Thao Thu Vo, and I recently (February 2024) published an open-access paper in Applied Measurement in Education on Traditional vs Intersectional DIF Analysis: Considerations and a Comparison Using State Testing Data.

https://doi.org/10.1080/08957347.2024.2311935

The paper extends research by Russell and colleagues (e.g., 2021) on intersectional differential item functioning (DIF).

Here’s our abstract.

Recent research has demonstrated an intersectional approach to the study of differential item functioning (DIF). This approach expands DIF to account for the interactions between what have traditionally been treated as separate grouping variables. In this paper, we compare traditional and intersectional DIF analyses using data from a state testing program (nearly 20,000 students in grade 11, math, science, English language arts). We extend previous research on intersectional DIF by employing field test data (embedded within operational forms) and by comparing methods that were adjusted for an increase in Type I error (Mantel-Haenszel and logistic regression). Intersectional analysis flagged more items for DIF compared with traditional methods, even when controlling for the increased number of statistical tests. We discuss implications for state testing programs and consider how intersectionality can be applied in future DIF research.

We refer to intersectional DIF as DIF with interaction effects, partly to highlight the methodology – which builds on traditional DIF as an analysis of main effects – and to distinguish it as one piece of a larger intersectional perspective on the item response process. We don’t get into the ecology of item responding (Zumbo et al., 2015), but that’s the idea – traditional DIF just scratches the surface.

A few things keep DIF analysis on the surface.

  1. More complex analysis would require larger sample sizes for field/pilot testing. We’d have to plan and budget for it.
  2. Better analysis would also require a theory of test bias that developers may not be in a position to articulate. This brings in the debate on consequential validity evidence – who is responsible for investigating test bias, and how extensive does analysis need to be?
  3. Building on 2, only test developers have ready access to the data needed for DIF analysis. Other researchers and the public, who might have good input, aren’t involved. I touch on this idea in a previous post.

References

Albano, T., French, B. F., & Vo, T. T. (2024). Traditional vs intersectional DIF analysis: Considerations and a comparison using state testing data. Applied Measurement in Education, 37(1), 57-70. https://doi.org/10.1080/08957347.2024.2311935

Russell, M., & Kaplan, L. (2021). An intersectional approach to differential item functioning: Reflecting configurations of inequality. Practical Assessment, Research & Evaluation, 26(21), 1-17.

Zumbo, B. D., Liu, Y., Wu, A. D., Shear, B. R., Olvera Astivia, O. L., & Ark, T. K. (2015). A methodology for Zumbo’s third generation DIF analyses and the ecology of item responding. Language Assessment Quarterly, 12(1), 136-151. https://doi.org/10.1080/15434303.2014.972559

Review of Cizek’s Validity Book

I recently reviewed G. J. Cizek’s book Validity – An Integrated Approach to Test Score Meaning and Use (published by Routledge, 2020) for the journal Applied Measurement in Education. Here’s a link to my review.

Here’s an overview, from the first paragraph in the review.

Can measurement inferences be meaningful but not useful? Are we better off evaluating test score interpretations separate from their applications? Does validity theory itself need to be revamped? These are the kinds of big philosophical questions Cizek tackles, though with limited philosophizing, in his book Validity – An Integrated Approach to Test Score Meaning and Use. The premise of the book, that validity does need revamping, won’t come as a surprise to readers familiar with his earlier writing on the topic. The main ideas are the same, as are some of his testing examples and metaphors. However, the book does give Cizek space to elaborate on his comprehensive framework for defensible testing, and the target audience of “rigorous scholars and practitioners… who have no wish to be philosophers of science” may appreciate the book’s focus on pragmatic recommendations over “metaphysical contemplations.”

And here’s my synopsis of the book by chapter.

After an intriguing preface (current validation efforts are described as anemic and lacking in alacrity), the book starts with an introduction to some foundational testing concepts (Chapter 1), and then reviews areas of consensus in validation (e.g., content, response process, convergent evidence; Chapter 2), before highlighting the essential point of disagreement (i.e., how we handle test uses and consequences; Chapter 3). Cizek’s main argument, reiterated throughout the book, is that considerations around score inference should nearly always be detached from considerations around test use, and that combining the two (common in the US since the 1990s) has been counterproductive. He presents a framework that separates a) validation of the intended meaning of scores via the usual sources of evidence, minus uses and consequences (Chapter 4), from b) justifying the intended uses of scores, following theory and methods from program evaluation (Chapter 5). The book ends with recommendations for determining how much evidence is enough for successful validation and justification (Chapter 6), and, finally, a summary with comments on future directions (Chapter 7).

Throughout the book, Cizek critiques the writings of Messick, a distinguished validity theorist, and he acknowledges in the book’s preface that doing so felt like tugging on Superman’s cape. I’m not sure where that puts me, someone who has only ever written about validity as it relates to other issues like item bias. I guess I’m either spitting into the wind or pulling the mask off the Old Lone Ranger.

Though I agree with Cizek on some key issues – including that validity theory is becoming impractically complex – my review of the book ended up being mostly critical. Maybe half of my 1800 or so words went to summarizing two limitations that I see in the book. First, it oversimplifies and sometimes misrepresents the alternative and more mainstream perspective that uses and consequences should be part of validity. Quotations and summaries of the opposing views could have been much tighter (I highlight a few in my review). Second, the book leaves us wanting more on the question of how to integrate information – if we evaluate testing in two stages, based on meaning in scores and justification of uses, how do we combine results to determine if a test is defensible? The two stages are discussed separately, but the crucial integration step isn’t clearly explained or demonstrated.

I do like how the book lays out program evaluation as a framework for evaluating (some would say validating) uses and consequences. Again, it’s unclear how we integrate conclusions from this step with our other validation efforts in establishing score meaning. But program evaluation is a nice fit to the general problem of justifying test use. It offers us established procedures and best practices for study design, data collection, and analyzing and interpreting results.

I also appreciate that Cizek is questioning the ever creeping scope of validity. Uses and consequences can be relevant to validation, and shouldn’t be ignored, but they can also be so complex and open-ended as to make validation unmanageable. Social responsibility and social justice – which have received a lot of attention in the measurement literature in the past three years and so aren’t addressed in their latest form in the book – are a pertinent example. To what extent should antiracism be a component of test design? To what extent should adverse impact in test results invalidate testing? And who’s to say? I still have some reading to do (Applied Measurement in Education has a new special issue on social justice topics), but it seems like proponents would now argue, in the most extreme case, that any group difference justifies pausing or reconsidering testing. Proposals like this need more study and discussion (similar to what we had on social responsibility in admission testing) before they’re applied generally or added to our professional standards.

Calculating Implicit Association Test Scores

I wrote a couple years ago about the limitations of implicit association tests (IAT) for measuring racial bias. Their reliability (test-retest) and validity (correlations with measures of overt bias) are surprisingly low, considering the popularity of the tests.

At the time, I couldn’t find an explanation of how IAT scores are calculated (I didn’t look very hard). Here are a few references.

Some of the original scoring methods come from Greenwald, McGhee, and Schwartz (1998), and updated methods are given in Greenwald, Nosek, and Banaji (2003). All of the methods are based on response latencies measured in milliseconds. Röhner and Thoss (2019) summarize how the methods work and demonstrate with R code.
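
As a rough sketch of the core idea behind the updated (D score) algorithm, here’s a minimal R example for a single respondent. It skips the published details (error penalties, practice vs test block pairs, participant exclusions), and the data frame and its columns (latency, block) are hypothetical; see Röhner and Thoss (2019) for the full procedure.

# Minimal sketch of the D-score idea for one respondent, not the full
# Greenwald et al. (2003) algorithm; latencies are in milliseconds
iat <- data.frame(
  latency = c(650, 720, 580, 900, 1100, 1300, 980, 1250),
  block = rep(c("compatible", "incompatible"), each = 4)
)

# Drop overly long trials (the published algorithm drops latencies above 10,000 ms)
iat <- subset(iat, latency <= 10000)

# D = difference in block means divided by the inclusive (pooled) SD
d_score <- with(iat,
  (mean(latency[block == "incompatible"]) -
    mean(latency[block == "compatible"])) / sd(latency))
d_score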

References

Greenwald, A., McGhee, D., & Schwartz, J. (1998). Measuring individual differences in implicit cognition: The Implicit Association Test. Journal of Personality and Social Psychology, 74, 1464-1480.

Greenwald, A., Nosek, B., & Banaji, M. (2003). Understanding and using the Implicit Association Test: An improved scoring algorithm. Journal of Personality and Social Psychology, 85, 197-216.

Röhner, J., & Thoss, P. J. (2019). A tutorial on how to compute traditional IAT effects with R. The Quantitative Methods for Psychology, 15(2), 134-147. https://doi.org/10.20982/tqmp.15.2.p134

Differential Item Functioning in the Smarter Balanced Test

In class last fall, we reviewed the Smarter Balanced (SB) technical report for examples of how validity evidence is collected and documented, including through differential item functioning (DIF) analysis.

I teach and research DIF, but I don’t often inspect operational results from a large-scale standardized test. Results for race/ethnicity showed a few unexpected trends. Here’s a link to the DIF section of the 2018/2019 technical report.

https://technicalreports.smarterbalanced.org/2018-19_summative-report/_book/test-fairness.html#differential-item-functioning-dif

The report gives an overview of the Mantel-Haenszel method, and then shows, for ELA/literacy and math, numbers of items from the test bank per grade and demographic variable that fall under each DIF category.

  • The NA category is for items that didn’t have enough valid responses, for a given comparison (eg, female vs male), to estimate DIF. Groups with smaller sample sizes had more items with NA.
  • A, B, C are the usual Mantel-Haenszel levels of DIF, where A is negligible, B is moderate, and C is large. Testing programs, including SB, focus on items at level C and mostly leave A and B alone.
  • The +/- indicates the direction of the DIF: negative is for items that favor the reference group (eg, male) or disadvantage the focal group (eg, female), and positive is for items that do the opposite, favoring the focal group or disadvantaging the reference group. The delta scale behind these categories is sketched just after this list.
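
For a quick sense of the delta scale behind these categories, here’s a minimal sketch (not Smarter Balanced’s code). The ETS delta statistic is -2.35 times the log of the Mantel-Haenszel common odds ratio, so odds ratios above 1 (the item is relatively easier for the reference group) give negative deltas.

# Hypothetical Mantel-Haenszel common odds ratios (alpha_MH)
alpha_mh <- c(0.5, 1, 2)

# ETS delta scale
delta_mh <- -2.35 * log(alpha_mh)
round(delta_mh, 2)
## [1]  1.63  0.00 -1.63

# alpha_MH > 1 gives a negative delta, the C- direction above; level C
# requires roughly |delta| >= 1.5 along with statistical significance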

The SB report suggests that DIF analyses were conducted at the field test stage, when items weren’t yet operational. But the results tables say “DIF items in the current summative pool,” which makes it sound like they include operational items. I’m not sure how this worked.

ELA

Here’s a bar chart that summarizes level C DIF by grade for ELA in a subset of demographic comparisons. The bluish bars going up are percentages of items with C+ DIF (favoring the focal group) and the reddish bars going down are for C- (favoring the reference group). The groups being compared are labeled on the right side.

Smarter Balanced 2018/2019 DIF results, percentages of items with level C DIF for ELA/literacy

I’m using percentages instead of counts of items because the number of items differs by grade (under 1,000 in early grades, over 2,000 in grade 11), and the number of items with data for DIF analysis varies by demographic group (some groups had more NA than others). Counts would be more difficult to compare. These percentages exclude items in the NA category.

For ELA, we tend to see more items favoring female (vs male) and asian (vs white) students. There doesn’t seem to be a trend for black and white students, but in the hispanic vs white comparison more items favor white students (almost none favor hispanic students). In some groups, we also see a slight increase for later grades, but a decrease at grade 11.

Math

Here’s the same chart but for math items. Note the change in y-axis (now maxing at 4 percent instead of 2 for ELA) to accommodate the increase in DIF favoring asian students (vs white). Other differences from ELA include slightly more items favoring male students (vs female), and more balance in results for black and white students, and hispanic and white students.

DIF in grades 6, 7, and 11 reaches 3 to 4% of items favoring asian students. Converting these back to counts, the total numbers of items with data for DIF analysis are 1,114, 948, and 966 in grades 6, 7, and 11, respectively, and the numbers of C+ DIF favoring asian students are 35, 30, and 38.
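
As a quick arithmetic check, converting those counts back to percentages reproduces the values in the chart.

# C+ DIF counts favoring asian students, and totals with DIF data,
# for grades 6, 7, and 11
c_plus <- c(35, 30, 38)
n_dif <- c(1114, 948, 966)
round(100 * c_plus / n_dif, 1)
## [1] 3.1 3.2 3.9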

Conclusions

These DIF results are surprising, especially for the math test, but I’d want some more information before drawing conclusions.

First, what was the study design supporting the DIF analysis? The technical report doesn’t describe how and when data were collected. Within a given grade and demographic group, do these results accumulate data from different years and different geographic locations? If so, how were forms constructed and administered? Were field test items embedded within the operational adaptive test? And how were results then linked?

Clarifying the study design and scaling would help us understand why so many items had insufficient sample sizes for estimating DIF, and why the numbers of items in the NA category differed by grade and demographic group. Field test items are usually randomly assigned to test takers, which would help ensure numbers of respondents are balanced across items.

Finally, the report leaves out some key details on how the Mantel-Haenszel DIF analysis was conducted. We have the main equations, but we don’t have information about what anchor/control variable was used (eg, total score vs scale score), whether item purification was used, and how significance testing factored into determining the DIF categories.

Linking vs Mapping vs Predicting

I recently came across a few articles that discuss scale linking in the health sciences, where researchers measure things like psychological distress, well-being, and fatigue, and need to convert patient results from one instrument to another. The literature refers to the process as mapping (Wailoo et al, 2017) but the goals seem to be the same as with other forms of scaling, linking, and equating in education and psychology.

Fayers and Hays (2014) talk about how mapping with health scales is typically accomplished using regression models, which can produce biased results because of regression to the mean. They recommend linking methods. Thompson, Lapin, and Katzan (2017) demonstrate linking with linear and equipercentile functions.

On a related note, someone also shared Bottai et al (2022), which derives a linear prediction function, based on the concordance correlation from Lin (1989), that ends up being equivalent to linear equating.
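
To illustrate the distinction these papers draw, here’s a minimal simulated sketch (not code from any of the cited methods) comparing a regression-based mapping with linear linking. The score vectors are made up; the point is that regression predictions shrink toward the mean of the target scale, the bias Fayers and Hays describe, while linear linking matches the target mean and standard deviation.

# Simulated scores from two instruments measuring a similar construct
set.seed(1)
x <- rnorm(500, mean = 50, sd = 10)
y <- 30 + 0.8 * x + rnorm(500, sd = 6)

# Regression mapping: predicted scores are compressed toward mean(y)
reg_map <- predict(lm(y ~ x))

# Linear linking: match the mean and SD of y
lin_link <- (x - mean(x)) * sd(y) / sd(x) + mean(y)

# Compare spread: the regression mapping understates the variability in y
c(sd(y), sd(reg_map), sd(lin_link))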

References

Bottai, M., Kim, T., Lieberman, B., Luta, G., & Peña, E. (2022). On optimal correlation-based prediction. The American Statistician, 76(4), 313-321. https://doi.org/10.1080/00031305.2022.2051604

Fayers, P. M., & Hays, R. D. (2014). Should linking replace regression when mapping from profile-based measures to preference-based measures? Value in Health, 17(2), 261-265. http://dx.doi.org/10.1016/j.jval.2013.12.002

Lin, L. (1989). A concordance correlation coefficient to evaluate reproducibility. Biometrics, 45, 255–268.

Thompson, N. R., Lapin, B. R., & Katzan, I. L. (2017). Mapping PROMIS global health items to EuroQol (EQ-5D) utility scores using linear and equipercentile equating. Pharmacoeconomics, 35, 1167-1176. http://dx.doi.org/10.1007/s40273-017-0541-1

Wailoo, A. J., Hernandez-Alava, M., Manca, A., Mejia, A., Ray, J., Crawford, B., Botteman, M., & Busschbach, J. (2017). Mapping to estimate health-state utility from non–preference-based outcome measures: An ISPOR good practices for outcomes research task force report. Value in Health, 20(1), 18-27. http://dx.doi.org/10.1016/j.jval.2016.11.006

More issues in the difR package for differential item functioning analysis in R

I wrote last time about the difR package (Magis, Beland, Tuerlinckx, & De Boeck, 2010) and how it doesn’t account for missing data in Mantel-Haenszel DIF analysis. I’ve noticed two more issues as I’ve continued testing the package (version 5.1).

  1. The problem with Mantel-Haenszel also appears in the code for the standardization method, accessed via difR:::difStd, which calls difR:::stdPDIF. Look there and you’ll see base:::length used to obtain counts (e.g., number of correct/incorrect for focal and reference groups at a given score level). Missing data will throw off these counts. So, difR standardization and MH are only recommended with complete data.
  2. In the logistic regression method, the code for pseudo $R^2$ (used as a measure of DIF effect size) can lead to errors for some models. The code also seems to assume no missing data. More on these issues below.

DIF with the logistic regression method is performed using the difR:::difLogistic function, which ultimately calls difR:::Logistik to do the modeling (via glm) and calculate the $R^2$. The functions for calculating $R^2$ are embedded within the difR:::Logistik function.

R2 <- function(m, n) {
  1 - (exp(-m$null.deviance / 2 + m$deviance / 2))^(2 / n)
}
R2max <- function(m, n) {
  1 - (exp(-m$null.deviance / 2))^(2 / n)
}
R2DIF <- function(m, n) {
  R2(m, n) / R2max(m, n)
}

These functions capture $R^2$ as defined by Nagelkerke (1991), which is a modification of Cox and Snell (1989). When these are run via difR:::Logistik, the sample size argument n is set to the number of rows in the data set, which ignores missing data on a particular item. So, n will be inflated for items with missing data, and $R^2$ will be reduced (assuming a constant deviance).

In addition to the missing data issue, because of the way they’re written, these functions stretch the precision limits of R. In the R2max function specifically, the model deviance is first converted to a log-likelihood, and then a likelihood, before raising to 2/n. The problem is, large deviances correspond to very small likelihoods. A deviance of 300 gives us a likelihood of 7.175096e-66, which R can manage. But a deviance of 1500 gives us a likelihood of 0, which produces $R^2 = 1$.
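
A quick illustration of the underflow, using hypothetical deviance values:

# Converting a deviance to a likelihood, as in the difR functions above
exp(-300 / 2)   # 7.175096e-66, still representable in double precision
exp(-1500 / 2)  # 0, the likelihood underflows, and R2max evaluates to 1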

The workaround is simple – avoid calculating likelihoods by rearranging terms. Here’s how I’ve written them in the epmr package.

# Cox and Snell R2, computed directly from the deviances
r2_cox <- function(object, n = length(object$y)) {
  1 - exp((object$deviance - object$null.deviance) / n)
}

# Nagelkerke R2, Cox and Snell divided by its maximum possible value
r2_nag <- function(object, n = length(object$y)) {
  r2_cox(object, n) / (1 - exp(-object$null.deviance / n))
}

And here are two examples that compare results from difR with epmr and DescTools. The first example shows how roughly 10% missing data reduces $R^2$ by as much as 0.02 when using difR. Data come from the verbal data set, included in difR.

# Load example data from the difR package
# See ?difR:::verbal for details
data("verbal", package = "difR")

# Insert missing data on first half of items
set.seed(42)
np <- nrow(verbal)
ni <- 24
na_index <- matrix(
  sample(c(TRUE, FALSE), size = np * ni / 2,
    prob = c(.1, .9), replace = TRUE),
  nrow = np, ncol = ni / 2)
verbal[, 1:(ni / 2)][na_index] <- NA

# Get R2 from difR
# verbal[, 26] is the grouping variable gender
verb_total <- rowSums(verbal[, 1:ni], na.rm = TRUE)
verb_difr <- difR:::Logistik(verbal[, 1:ni],
  match = verb_total, member = verbal[, 26],
  type = "udif")

# Fit the uniform DIF models by hand
# To test for DIF, we would compare these with base
# models, not fit here
verb_glm <- vector("list", ni)
for (i in 1:ni) {
  verbal_sub <- data.frame(y = verbal[, i],
    t = verb_total, g = verbal[, 26])
  verb_glm[[i]] <- glm(y ~ t + g, family = "binomial",
    data = verbal_sub)
}

# Get R2 from epmr and DescTools packages
verb_epmr <- sapply(verb_glm, epmr:::r2_nag)
verb_desc <- sapply(verb_glm, DescTools:::PseudoR2,
  which = "Nag")

# Compare
# epmr and DescTools match for all items
# difR matches for the last 12 items, but R2 on the
# first 12 are depressed because of missing data
verb_tab <- data.frame(item = 1:24,
  pct_na = apply(verbal[, 1:ni], 2, epmr:::summiss) / np,
  difR = verb_difr$R2M0, epmr = verb_epmr,
  DescTools = verb_desc)

This table shows results for items 9 through 16: the last four items with missing data (items 9 to 12) and the first four with complete data (items 13 to 16).

item  pct_na   difR   epmr  DescTools
   9   0.089  0.197  0.203      0.203
  10   0.085  0.308  0.318      0.318
  11   0.139  0.408  0.429      0.429
  12   0.136  0.278  0.293      0.293
  13   0.000  0.405  0.405      0.405
  14   0.000  0.532  0.532      0.532
  15   0.000  0.370  0.370      0.370
  16   0.000  0.401  0.401      0.401
Some results from first example

The second example shows a situation where $R^2$ in the difR package comes to 1. Data are from the 2009 administration of PISA, included in epmr.

# Prep data from epmr::PISA09
# Vector of item names
rsitems <- c("r414q02s", "r414q11s", "r414q06s",
  "r414q09s", "r452q03s", "r452q04s", "r452q06s",
  "r452q07s", "r458q01s", "r458q07s", "r458q04s")

# Subset to USA and Canada
pisa <- subset(PISA09, cnt %in% c("USA", "CAN"))

# Get R2 from difR
pisa_total <- rowSums(pisa[, rsitems],
  na.rm = TRUE)
pisa_difr <- difR:::Logistik(pisa[, rsitems],
  match = pisa_total, member = pisa$cnt,
  type = "udif")

# Fit the uniform DIF models by hand
pisa_glm <- vector("list", length(rsitems))
for (i in seq_along(rsitems)) {
  pisa_sub <- data.frame(y = pisa[, rsitems[i]],
    t = pisa_total, g = pisa$cnt)
  pisa_glm[[i]] <- glm(y ~ t + g, family = "binomial",
    data = pisa_sub)
}

# Get R2 from epmr and DescTools packages
pisa_epmr <- sapply(pisa_glm, epmr:::r2_nag)
pisa_desc <- sapply(pisa_glm, DescTools:::PseudoR2,
  which = "Nag")

# Compare
pisa_tab <- data.frame(item = seq_along(rsitems),
  difR = pisa_difr$R2M0, epmr = pisa_epmr,
  DescTools = pisa_desc)

Here are the resulting $R^2$ for each package, across all items.

item  difR   epmr  DescTools
   1     1  0.399      0.399
   2     1  0.268      0.268
   3     1  0.514      0.514
   4     1  0.396      0.396
   5     1  0.372      0.372
   6     1  0.396      0.396
   7     1  0.524      0.524
   8     1  0.465      0.465
   9     1  0.366      0.366
  10     1  0.410      0.410
  11     1  0.350      0.350
Results from second example

References

Cox, D. R. & Snell, E. J. (1989). The analysis of binary data. London: Chapman and Hall.

Magis, D., Béland, S., Tuerlinckx, F., & De Boeck, P. (2010). A general framework and an R package for the detection of dichotomous differential item functioning. Behavior Research Methods, 42, 847-862.

Nagelkerke, N. J. D. (1991). A note on a general definition of the coefficient of determination. Biometrika, 78, 691-692.

Issues in the difR Package Mantel-Haenszel Analysis

I’ve been using the difR package (Magis, Beland, Tuerlinckx, & De Boeck, 2010) to run differential item functioning (DIF) analysis in R. Here’s the package on CRAN.

https://cran.r-project.org/package=difR

I couldn’t get my own code to match the Mantel-Haenszel (MH) results from the difR package and it looks like it’s because there’s an issue in how the difR:::difMH function handles missing data. My code is on GitHub.

https://github.com/talbano/epmr/blob/master/R/difstudy.R

The MH DIF method is based on counts of correct vs incorrect responses in focal vs reference groups of test takers across levels of the construct (usually total scores). The code for difR:::difMH uses the length of a vector that is subset with logical indices to get the counts of test takers in each group. But missing item scores produce NA in the logical comparisons, the NA elements are carried into the subset, and length counts them.

I’m pasting below the code from difR:::mantelHaenszel, which is called by difR:::difMH to run the MH analysis. The lines that compute Aj, Bj, Cj, Dj, nrj, nfj, m1j, m0j, and Tj all use length to find counts. This works fine with complete data, but as soon as someone has NA for an item score, captured in data[, item], they’ll figure into the counts regardless of the comparisons being examined.

function (data, member, match = "score", correct = TRUE, exact = FALSE, 
    anchor = 1:ncol(data)) 
{
    res <- resAlpha <- varLambda <- RES <- NULL
    for (item in 1:ncol(data)) {
        data2 <- data[, anchor]
        if (sum(anchor == item) == 0) 
            data2 <- cbind(data2, data[, item])
        if (!is.matrix(data2)) 
            data2 <- cbind(data2)
        if (match[1] == "score") 
            xj <- rowSums(data2, na.rm = TRUE)
        else xj <- match
        scores <- sort(unique(xj))
        prov <- NULL
        ind <- 1:nrow(data)
        for (j in 1:length(scores)) {
            Aj <- length(ind[xj == scores[j] & member == 0 & 
                data[, item] == 1])
            Bj <- length(ind[xj == scores[j] & member == 0 & 
                data[, item] == 0])
            Cj <- length(ind[xj == scores[j] & member == 1 & 
                data[, item] == 1])
            Dj <- length(ind[xj == scores[j] & member == 1 & 
                data[, item] == 0])
            nrj <- length(ind[xj == scores[j] & member == 0])
            nfj <- length(ind[xj == scores[j] & member == 1])
            m1j <- length(ind[xj == scores[j] & data[, item] == 
                1])
            m0j <- length(ind[xj == scores[j] & data[, item] == 
                0])
            Tj <- length(ind[xj == scores[j]])
            if (exact) {
                if (Tj > 1) 
                  prov <- c(prov, c(Aj, Bj, Cj, Dj))
            }
            else {
                if (Tj > 1) 
                  prov <- rbind(prov, c(Aj, nrj * m1j/Tj, (((nrj * 
                    nfj)/Tj) * (m1j/Tj) * (m0j/(Tj - 1))), scores[j], 
                    Bj, Cj, Dj, Tj))
            }
        }
        if (exact) {
            tab <- array(prov, c(2, 2, length(prov)/4))
            pr <- mantelhaen.test(tab, exact = TRUE)
            RES <- rbind(RES, c(item, pr$statistic, pr$p.value))
        }
        else {
            if (correct) 
                res[item] <- (abs(sum(prov[, 1] - prov[, 2])) - 
                  0.5)^2/sum(prov[, 3])
            else res[item] <- (abs(sum(prov[, 1] - prov[, 2])))^2/sum(prov[, 
                3])
            resAlpha[item] <- sum(prov[, 1] * prov[, 7]/prov[, 
                8])/sum(prov[, 5] * prov[, 6]/prov[, 8])
            varLambda[item] <- sum((prov[, 1] * prov[, 7] + resAlpha[item] * 
                prov[, 5] * prov[, 6]) * (prov[, 1] + prov[, 
                7] + resAlpha[item] * (prov[, 5] + prov[, 6]))/prov[, 
                8]^2)/(2 * (sum(prov[, 1] * prov[, 7]/prov[, 
                8]))^2)
        }
    }
    if (match[1] != "score") 
        mess <- "matching variable"
    else mess <- "score"
    if (exact) 
        return(list(resMH = RES[, 2], Pval = RES[, 3], match = mess))
    else return(list(resMH = res, resAlpha = resAlpha, varLambda = varLambda, 
        match = mess))
}

Here’s a very simplified example of the issue. The vector 1:4 is in place of the ind object in the mantelHaenszel function, and the vector c(1, 1, NA, 0) is in place of data[, item]. One person has a score of 0 on this item, and two have scores of 1, but length returns a count of 2 for item score 0 and 3 for item score 1 because the NA is not removed by default.

length((1:4)[c(1, 1, NA, 0) == 0])
## [1] 2
length((1:4)[c(1, 1, NA, 0) == 1])
## [1] 3

With missing data, the MH counts from difR:::mantelHaenszel will all be padded by the number of people with NA for their item score. It could be that the authors are accounting for this somewhere else in the code, but I couldn’t find it.

Here’s what happens to the MH results with some made up testing data. For 200 people taking a test with five items, I’m giving a boost on two items to 20 of the reference group test takers (to generate DIF), and then inserting NA for 20 people on one of those items. MH stats are consistent across packages for the first DIF item (item 4) but not the second (item 5).

# Number of items and people
ni <- 5
np <- 200

# Create focal and reference groups
groups <- rep(c("foc", "ref"), each = np / 2)

# Generate scores
set.seed(220821)
item_scores <- matrix(sample(0:1, size = ni * np,
  replace = T), nrow = np, ncol = ni)

# Give 20 people from the reference group a boost on
# items 4 and 5
boost_ref_index <- sample((1:np)[groups == "ref"], 20)
item_scores[boost_ref_index, 4:5] <- 1

# Fix 20 scores on item 5 to be NA
item_scores[sample(1:np, 20), 5] <- NA

# Find total scores on the first three items,
# treated as anchor
total_scores <- rowSums(item_scores[, 1:3])

# Comparing MH stats, chi square matches for item 4
# with no NA but differs for item 5
epmr:::difstudy(item_scores, groups = groups,
  focal = "foc", scores = total_scores, anchor_items = 1:3,
  dif_items = 4:5, complete = FALSE)
## 
## Differential Item Functioning Study
## 
##   item  rn  fn r1 f1 r0 f0   mh  delta delta_abs chisq chisq_p ets_level
## 1    4 100 100 61 52 39 48 1.50 -0.946     0.946  1.58  0.2083         a
## 2    5  88  92 55 40 33 52 2.06 -1.701     1.701  4.84  0.0278         c
difR:::difMH(data.frame(item_scores), group = groups,
  focal.name = "foc", anchor = 1:3, match = total_scores)
## 
## Detection of Differential Item Functioning using Mantel-Haenszel method 
## with continuity correction and without item purification
## 
## Results based on asymptotic inference 
##  
## Matching variable: specified matching variable 
##  
## Anchor items (provided by the user): 
##    
##  X1
##  X2
##  X3
## 
##  
## No p-value adjustment for multiple comparisons 
##  
## Mantel-Haenszel Chi-square statistic: 
##  
##    Stat.  P-value  
## X4 1.5834 0.2083   
## X5 4.8568 0.0275  *
## 
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1  
## 
## Detection threshold: 3.8415 (significance level: 0.05)
## 
## Items detected as DIF items: 
##    
##  X5
## 
##  
## Effect size (ETS Delta scale): 
##  
## Effect size code: 
##  'A': negligible effect 
##  'B': moderate effect 
##  'C': large effect 
##  
##    alphaMH deltaMH  
## X4  1.4955 -0.9457 A
## X5  1.8176 -1.4041 B
## 
## Effect size codes: 0 'A' 1.0 'B' 1.5 'C' 
##  (for absolute values of 'deltaMH') 
##  
## Output was not captured!

One more note: when reporting MH results, the difR package assigns the ETS DIF levels (A, B, C) using only the absolute delta values. You can see this in the difR:::print.MH function (not shown here). Usually, the MH approach also incorporates the p-value for the chi-square test (Zwick, 2012).
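
For comparison, here’s a rough sketch of a flagging rule that uses both pieces of information. It approximates the ETS categories with fixed delta cutoffs and the MH chi-square p-value; the full rules reviewed by Zwick (2012) also test whether the absolute delta is significantly greater than 1.

# Approximate ETS A/B/C classification from deltaMH and the MH chi-square
# p-value (a simplification of the rules reviewed by Zwick, 2012)
ets_class <- function(delta, p, alpha = 0.05) {
  level <- rep("A", length(delta))
  level[p < alpha & abs(delta) >= 1] <- "B"
  level[p < alpha & abs(delta) >= 1.5] <- "C"
  # Negative delta (favoring the reference group) gets a minus sign
  ifelse(level == "A", level,
    paste0(level, ifelse(delta < 0, "-", "+")))
}

# Applied to the difR output above for items 4 and 5
ets_class(delta = c(-0.9457, -1.4041), p = c(0.2083, 0.0275))
## [1] "A"  "B-"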

References

Magis, D., Béland, S., Tuerlinckx, F., & De Boeck, P. (2010). A general framework and an R package for the detection of dichotomous differential item functioning. Behavior Research Methods, 42, 847–862.

Zwick, R. (2012). A review of ETS differential item functioning assessment procedures: Flagging rules, minimum sample size requirements, and criterion refinement. Princeton, NJ: Educational Testing Service. https://files.eric.ed.gov/fulltext/EJ1109842.pdf

Some Equations and R Code for Examining Intersectionality in Differential Item Functioning Analysis

A couple of papers came out last year that consider intersectionality in differential item functioning (DIF) analysis. Russell and Kaplan (2021) introduced the idea, and demonstrated it with data from a state testing program. Then, Russell, Szendey, and Kaplan (2021) replicated the first study with more data. This is a neat application of DIF, and I’m surprised it hasn’t been explored until now. I’m sure we’ll see a flurry of papers on it in the next few years.

Side note, the second Russell study, published in Educational Assessment, doesn’t seem justified as a separate publication. They use the same DIF method as in the first paper, they appear to use the same data source, and they have similar findings. They also don’t address in the second study any of the limitations of the original study (e.g., they still use a single DIF method, don’t account for Type I error increase, don’t have access to item content, don’t have access to pilot vs operational items). The second study really just has more data.

Why is the intersectional approach neat? Because it can give us a more accurate understanding of potential item bias, to the extent that it captures a more realistic representation of the test taker experience.

The intersectional approach to DIF is a simple extension of the traditional approach, one that accounts for interactions among grouping variables. We can think of the traditional approach as focusing on main effects for distinct variables like gender (female compared with male) and race (Black compared with White). The intersectional approach simply interacts the grouping variables to examine the effects of membership in intersecting groups (e.g., Black female compared with White male).

Interaction DIF models

I like to organize DIF problems using explanatory item response theory (Rasch) models. In the base model, which assumes no DIF, the log-odds $\eta_{ij}$ of correct response on item $i$ for person $j$ can be expressed as a linear function of overall mean performance $\gamma_0$ plus mean performance on the item $\beta_{i}$ and the person $\theta_j$:

$$\eta_{ij} = \gamma_0 + \beta_i + \theta_j,$$

with $\beta$ estimated as a fixed effect and $\theta \sim \mathcal{N}(0, \, \sigma^{2})$ as a random person effect. The sum $\gamma_0 + \beta_i$ captures item easiness, with higher values indicating easier items.

Before we formulate DIF, we estimate a shift in mean performance by group:

$$\eta_{ij} = \gamma_0 + \gamma_{1}group_j + \beta_i + \theta_j.$$

In a simple dichotomous comparison, we can use indicator coding in $group$, where the reference group is coded as 0 and the focal group as 1. Then, $\gamma_0$ estimates the mean performance for the reference group and $\gamma_1$ is the impact or disparity for the focal group expressed as a difference from $\gamma_0$. To estimate DIF, we interact group with item:

$$\eta_{ij} = \gamma_0 + \gamma_{1}group_j + \beta_{0i} + \beta_{1i}group_j + \theta_j.$$

Now, $\beta_{0i}$ is the item difficulty estimate for the reference group and $\beta_{1i}$ is the DIF effect, expressed as a difference in performance on item $i$ for the focal group, controlling for $\theta$.

The previous equation captures the traditional DIF approach. Separate models would be estimated, for example, with gender in one model and then race/ethnicity in another. The interaction effect DIF approach consolidates terms into a single model with multiple grouping variables. Here, we replace $group$ with $f_j$ for female and $b_j$ for Black:

$$\eta_{ij} = \gamma_0 + \gamma_{1}f_j + \gamma_{2}b_j + \gamma_{3}f_{j}b_j + \beta_{0i} + \beta_{1i}f_j + \beta_{2i}b_j + \beta_{3i}f_{j}b_j + \theta_j.$$

With multiple grouping variables, again using indicator coding, $\gamma_0$ estimates the mean performance for the reference group (White male), $\gamma_{1}$ and $\gamma_{2}$ are the deviations in mean performance for White female and Black male students, respectively, and $\gamma_3$ is the additional deviation for Black female students beyond the sum $\gamma_1 + \gamma_2$. The $\beta$ terms are interpreted similarly but in reference to performance on item $i$, with $\beta_{1i}$, $\beta_{2i}$, and $\beta_{3i}$ as the DIF effects.

R code

Here’s what the above models look like when translated to lme4 (Bates et al, 2015) notation in R.

# lme4 code for running interaction effect DIF via explanatory Rasch
# modeling, via generalized linear mixed model
# family specifies the binomial/logit link function
# data_long would contain scores in a long/tall/stacked format
# with one row per person per item response
# item, person, f, and b are then separate columns in data_long

# Base model
glmer(score ~ 1 + item + (1 | person),
  family = "binomial", data = data_long)

# Gender DIF with main effects
glmer(score ~ 1 + f + item + f:item + (1 | person),
  family = "binomial", data = data_long)

# Race/ethnicity DIF with main effects
glmer(score ~ 1 + b + item + b:item + (1 | person),
  family = "binomial", data = data_long)

# Gender and race/ethnicity DIF with interaction effects
glmer(score ~ 1 + f + b + item + f:b + f:item + b:item + f:b:item + (1 | person),
  family = "binomial", data = data_long)

# Shortcut for writing out the same formula as the previous model
# This notation will automatically create all main effects and
# 2x and 3x interactions
glmer(score ~ 1 + f * b * item + (1 | person),
  family = "binomial", data = data_long)

In my experience, modeling fixed effects for items like this is challenging in lme4 (slow, with convergence issues). Random effects for items would simplify things, but we would have to adopt a different theoretical perspective, where we’re less interested in specific items and more interested in DIF effects, and the intersectional experience, overall.

Here’s what the code looks like with random effects for items and persons. In place of DIF effects, this will produce variances for each DIF term, which tell us how variable the DIF effects are across items by group.

# Gender and race/ethnicity DIF with interaction effects
# Random effects for items and persons
glmer(score ~ 1 + f + b + f:b + (1 + f + b + f:b | item) + (1 | person),
  family = "binomial", data = data_long)

# Alternatively
glmer(score ~ 1 + f * b + (1 + f * b | item) + (1 | person),
  family = "binomial", data = data_long)

While lme4 provides a flexible framework for explanatory Rasch modeling (Doran et al, 2007), DIF analysis gets complicated when we consider anchoring, which I’ve ignored in the equations and code above. In practice, ideally, our IRT model would include a subset of items where we are confident that DIF is negligible. These items anchor our scale and provide a reference point for comparing performance on the potentially problematic items.

The mirt R package (Chalmers, 2012) has a lot of nice features for conducting DIF analysis via IRT. Here’s how we get at main effects and interaction effects DIF using mirt:::multipleGroup and mirt:::DIF. The former runs the model and the latter reruns it, testing the significance of the multi group extension by item.

# mirt code for interaction effect DIF

# Estimate the multi group Rasch model
# Here, data_wide is a data frame containing scored item responses in
# columns, one per item
# group_var is a vector of main effect or interacting group values,
# one per person (e.g., "fh" and "mw" for female-hispanic and male-white)
# anchor_items is a vector of item names, matching columns in data_wide,
# for the items that are not expected to vary by group, these will
# anchor the scale prior to DIF analysis
# See the mirt help files for more info
mirt_mg_out <- multipleGroup(data_wide, model = 1, itemtype = "Rasch",
  group = group_var,
  invariance = c(anchor_items, "free_means", "free_variances"))

# Run likelihood ratio DIF analysis
# For each item, the original model is fit with and without the
# grouping variable specified as an interaction with item
# Output will then specify whether inclusion of the grouping variable
# improved model fit per item
# items2test identifies the columns for DIF analysis
# Apparently, items2test has to be a numeric index, I can't get a vector
# of item names to work, so these would be the non-anchor columns in
# data_wide
mirt_dif_out <- DIF(mirt_mg_out, "d", items2test = dif_items)

One downside to the current setup of mirt:::multipleGroup and mirt:::DIF is there isn’t an easy way to iterate through separate focal groups. The code above will test the effects of the grouping variable all at once. So, we’d have to run this separately for each dichotomous comparison (e.g., subsetting the data to Hispanic female vs White male, then Black female vs White male, etc) if we want tests by focal group.
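
Here’s an untested sketch of what that loop could look like, reusing the objects assumed above (data_wide, group_var, anchor_items, dif_items) and treating "mw" (male-white) as the reference group.

# Loop through dichotomous comparisons, each focal group vs reference "mw"
# multipleGroup and DIF are rerun on the subset of rows for each pair
library(mirt)

focal_groups <- setdiff(unique(group_var), "mw")
dif_by_group <- vector("list", length(focal_groups))
names(dif_by_group) <- focal_groups

for (g in focal_groups) {
  keep <- group_var %in% c(g, "mw")
  mg_out <- multipleGroup(data_wide[keep, ], model = 1, itemtype = "Rasch",
    group = group_var[keep],
    invariance = c(anchor_items, "free_means", "free_variances"))
  dif_by_group[[g]] <- DIF(mg_out, "d", items2test = dif_items)
}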

Of course, interaction effects DIF can also be analyzed outside of IRT (e.g., with the Mantel-Haenszel method). It simply involves more comparisons per item than with the main effect approach where we consider each grouping variable separately. For example, gender (with two levels, female, male) and race (with three levels, Black, Hispanic, White) gives us 3 comparisons per item with main effects, whereas we have 5 comparisons per item with interaction effects.
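
Here’s a tiny sketch of that counting, with the intersected grouping variable built via interaction().

# Intersected groups from gender and race levels
gender <- c("female", "male")
race <- c("Black", "Hispanic", "White")
cells <- expand.grid(gender = gender, race = race)
cells$group <- interaction(cells$gender, cells$race)

# Main effects: (2 - 1) + (3 - 1) = 3 comparisons per item
# Interaction: 2 * 3 - 1 = 5 comparisons per item, each focal cell
# against the reference cell male.White
nlevels(cells$group) - 1
## [1] 5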

After writing up all this example code, I’m realizing it would be much more useful if I demonstrated it with output. I’ll try to round up some data and share results in a future post.

References

Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48.

Chalmers, R. P. (2012). mirt: A multidimensional item response theory package for the R environment. Journal of Statistical Software, 48, 1–29.

Doran, H., Bates, D., Bliese, P., & Dowling, M. (2007). Estimating the multilevel Rasch model: With the lme4 package. Journal of Statistical Software, 20(2), 1–18.

Russell, M., & Kaplan, L. (2021). An intersectional approach to differential item functioning: Reflecting configurations of inequality. Practical Assessment, Research & Evaluation, 26(21), 1–17.

Russell, M., Szendey, O., & Kaplan, L. (2021). An intersectional approach to DIF: Do initial findings hold across tests? Educational Assessment, 26, 284–298.

Community Engagement in Assessment Development

In a commentary article from 2021 on social responsibility in admission testing (Albano, 2021), I recommended that we start crowd-sourcing the test development process.

By crowd-sourced development, I mean that the public as a community will support the review of content so as to organically and dynamically improve test quality. Not only does this promise to be more transparent and efficient than review by selected groups, but, with the right training, it also empowers the public to contribute directly to assessing fairness, sensitivity, and accessibility. Furthermore, a more diverse population, potentially the entire target population, will have access to the test, which will facilitate the rapid development of content that is more representative of and engaging for historically marginalized and underrepresented groups. This community involvement need not replace or diminish expert review. It can supplement it.

The idea of crowd-sourcing item writing and review has been on my mind for a decade or so. I pursued it while at the University of Nebraska, creating a web app (https://proola.org, now defunct) intended to support educators in sharing and getting feedback on their classroom assessment items. We piloted the app with college instructors from around the US to build a few thousand openly-licensed questions (Miller & Albano, 2017). But I couldn’t keep the momentum going after that and the project fizzled out.

Also while at Nebraska, I worked with Check for Learning (C4L, also now defunct I believe), a website managed by the Nebraska Department of Education that let K12 teachers from across the state share formative assessment items with one another. The arrangement was that a teacher would contribute a certain number of items to the bank before they could administer questions from C4L in their classroom. If I remember right, the site was maintained for a few years but ultimately shut down because of a lack of interest.

In these two examples, we can think of the item writing process as being spread out horizontally. Instead of the usual limited and controlled sample, access is given to a wider “crowd” of content experts. In the case of C4L, the entire population of teachers could contribute to the shared item bank.

Extending this idea, we can think of community engagement as distributing assessment development vertically to other populations, where we expand both on a) what we consider to be appropriate content, and b) who we consider to be experts in it.

In addition to working with students and educators, engaging the community could involve surveying family members or interviewing community leaders to better understand student backgrounds and experiences. We might review outlines/frameworks together, and get feedback on different contexts, modes, and methods of assessment. We could discuss options for assessment delivery and technology, and how to best communicate regarding assessment preparation, practice at home, and finally interpreting results.

I am hearing more discussion lately about increasing community engagement in assessment development. The aim is to decolonize and create culturally relevant/sustaining content, while also enhancing transparency and buy-in at a more local level. This comes alongside, or maybe in the wake of, a broader push to revise our curricula and instruction to be more oriented toward equity and social justice.

I’m still getting into the literature, but these ideas seem to have taken shape in the context of educational assessment, and then testing and measurement more specifically, in the 1990s. Here’s my current reading list from that timeframe.

  • Ladson-Billings and Tate (1995) introduce critical race theory in education as a framework and method for understanding educational inequities. In parallel, Ladson-Billings (1995) outlines culturally responsive pedagogy.
  • Moss (1996) argues for a multi-method approach to validation, where we leverage the contrast between traditional “naturalist” methods with contextualized “interpretive” ones, with the goal of “expanding the dialogue among measurement professionals to include voices from research traditions different from ours and from the communities we study and serve” (p 20).
  • Lee (1998), referencing Ladson-Billings, applies culturally responsive pedagogy to improve the design of performance assessments “that draw on culturally based funds of knowledge from both the communities and families of the students” and that “address some community-based, authentic need” (p 273).
  • Gipps (1999) highlights the importance of social and cultural considerations in assessment, referencing Moss among others, within a comprehensive review of the history of testing and its epistemological strengths and limitations.
  • Finally, Shepard (2000), referencing Gipps among others, provides a social-constructivist framework for assessment in support of teaching and learning, one that builds on cognitive, constructivist, and sociocultural theories.

References

Albano, A. D. (2021). Commentary: Social responsibility in college admissions requires a reimagining of standardized testing. Educational Measurement: Issues and Practice, 40, 49-52.

Gipps, C. (1999). Socio-cultural aspects of assessment. Review of Research in Education, 24, 355–392.

Ladson-Billings, G. (1995). Toward a theory of culturally relevant pedagogy. American Educational Research Journal, 32, 465-491.

Ladson-Billings, G., & Tate, W. F. (1995). Toward a critical race theory of education. Teachers College Record, 97, 47-68.

Lee, C. D. (1998). Culturally responsive pedagogy and performance-based assessment. The Journal of Negro Education, 67, 268-279.

Miller, A. & Albano, A. D. (2017, October). Content Camp: Ohio State’s collaborative, open test bank pilot. Paper presented at OpenEd17: The 14th Annual Open Education Conference, Anaheim, CA.

Moss, P. A. (1996). Enlarging the dialogue in educational measurement: Voices from interpretative research traditions. Educational Researcher, 25, 20-28.

Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29, 4-14.

EMIP Commentaries on College Admission Tests and Social Responsibility by Koljatic, Silva, and Sireci

I’m sharing here my notes on a series of commentaries in press with the journal Educational Measurement: Issues and Practice (EMIP). The commentaries examine the topic of social responsibility (SR) in college admission testing, in response to the following focus article, where the authors challenge the testing industry to be more engaged in improving equity in education.

Koljatic, M., Silva, M., & Sireci, S. (in press). College admission tests and social responsibility. Educational Measurement: Issues and Practice. https://doi.org/10.1111/emip.12425.

I enjoyed reading the commentaries. They are thoughtful and well-written, represent a variety of perspectives on SR, and raise some valid concerns. For the most part, there is agreement that we can do better as a field, though there is disagreement on the specifics.

There are 14 articles, including mine. I’m going to list them alphabetically by last name of first author, and give a short summary of the main points. Full references are at the end.

1. Ackerman, The Future of College Admissions Tests

  • Ackerman defends the testing industry, saying we haven’t ignored SR so much as we’ve attended to what is becoming an outdated version of SR, one that valued merit over high socioeconomic status. We haven’t been complacent, just slow to change course as SR has evolved. This reframing serves to distribute the responsibility, but the main point from the focus article still stands: standardized testing is lagging and we need to pick up our feet.
  • Ackerman recommends considering tests of competence, perhaps something with criterion referencing, resembling Advanced Placement, though we still have to deal with differential access to the target test content.

2. Albano, Social Responsibility in College Admissions Requires a Reimagining of Standardized Testing

  • My article summarizes the debate around SR in admissions in the University of California (UC) over the past few years, with references to some key policy documents.
  • I critique the Nike analogy, pointing out how the testing industry is more similar to a manufacturer, building shoes according to specifications, than it is to a distributor. Nike could just as easily represent an admissions program. This highlights how SR in college admissions will require cooperation from multiple stakeholders.
  • The suggestions from the focus article for how we address SR just scratch the surface. Our goal should be to build standardized assessment systems that are as openly accessible and transparent as possible, optimally having all test content and item-level data available online.

3. Briggs, Comment on College Admissions Tests and Social Responsibility

  • Briggs briefly scrutinizes the Nike analogy, and then contrasts the technical, standard definition of fairness or lack of bias with the public interpretation of fairness as lack of differential impact, acknowledging that we’ve worked as a field to address the former but not so much the latter.
  • He summarizes research, including his own, indicating that although coaching may have a small effect in terms of score changes, admission officers may still act on small differences. This suggests inequitable test preparation shouldn’t be ignored.
  • Briggs also recommends we consider how college admissions improves going forward with optional or no testing. Recent studies show that diversity may increase slightly as a result. It remains to be seen how other admission variables will be interpreted and potentially manipulated in the absence of a standardized quantitative measure.

4. Camara, Negative Consequences of Testing and Admission Practices: Should Blame Be Attributed to Testing Organizations?

  • Camara highlights how disparate impact in admissions goes beyond testing into the admission process itself. Other applicant variables (eg, personal statements, GPA, letters of recommendation) also have limitations.
  • He also says the focus article fails to acknowledge how industry has already been responsive to SR concerns. Changes have been made as requested, but they are slow to implement, and sometimes they aren’t even utilized (eg, non-cognitive assessments, essay sections).

5. Franklin et al, Design Tests with a Learning Purpose

  • Franklin et al propose, in under two pages, that we design admission tests to serve two purposes at once: 1) teaching, in addition to 2) measuring, which they refer to as the original purpose. Teaching via testing is accomplished through formative feedback that can guide test takers to remediation.
  • As an example, they reference a free and open-source testing system for college placement (https://daacs.net) that provides students with diagnostic information and learning resources.
  • This sort of idea came up in our conversations around admissions at the UC. As a substitute for the SAT, we considered the Smarter Balanced assessments (used for end-of-year K12 testing in California), which, in theory, could provide diagnostic information linked to content standards.
  • Measurement experts might say that when a test serves multiple purposes it risks serving none of them optimally. This assumes that there are limited resources for test development or that the multiple purposes involve competing interests and trade-offs, which may or may not actually be the case.

6. Geisinger, Social Responsibility, Fairness, and College Admissions Tests

  • Geisinger gives some historical context to the discussion of fairness and clarifies from the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 2014) that the users of tests are ultimately responsible for their use.
  • He contrasts validity with the similar but more comprehensive utility theory from industrial/organizational psychology. Utility theory accounts for all of the costs and impacts of test use, and in this way it seems to overlap with what we call consequential validity.
  • Geisinger also recommends we expand DIF analysis to include external criterion measures. This idea also came up in our review of the SAT and alternatives in the UC.

7. Irribarra et al, Large-Scale Assessment and Legitimacy Beyond the Corporate Responsibility Model

  • Irribarra et al argue that admission testing is not a product or service but a public policy intervention, in which case, it’s reasonable to expect testing to have a positive impact. They don’t really justify this position or consider the alternatives.
  • The authors outline three strategies for increasing legitimacy of admission testing as policy intervention, including 1) increased transparency (in reporting), 2) adding value (eg, formative score interpretations), and 3) community participation (eg, having teachers as item writers and ambassadors to the community). These strategies align with the recommendations in other articles, including mine.

8. Klugman et al, The Questions We Should Be Asking About Socially Responsible College Admission Testing

  • This commentary provided lots of concrete ideas to discuss. I’ll probably need a separate post to elaborate.
  • In parsing the Nike analogy, Klugman et al note, as do other commentaries, that testing companies have less influence over test use than a distributor like Nike may have over its manufacturers. As a result, the testing industry may have less leverage for change. The authors also point out that the actual impacts of Nike accepting SR are unclear. We shouldn’t assume that there has been sustained improvement in manufacturing, as there is evidence that problems persist, and it could be that “Nike leadership stomps out scandals as they pop up” (p 1).
  • Klugman et al cite a third flaw in the Nike analogy, and I would push back on this one. They say that, whereas consumers pressured for change with Nike, the consumers of tests (the colleges and universities who use them) “are not demanding testing agencies dramatically reenvision their products and how they are used” (p 2). While I agree that higher education is in the best position to ask for a better testing product, I disagree that they’ve neglected to do so. Concerns have been raised over the years and the testing industry has responded. Camara and Briggs both note this in their commentaries, and Camara lists out a few examples, as do commentaries from ACT and College Board (below).
  • That last point might boil down to what the authors meant by “dramatically reenvision” in the quote above. It’s unclear what a dramatic reenvisioning would entail. Maybe the authors would accept that changes have been made, but that they haven’t been dramatic enough.
  • Next, Klugman et al argue that corporate SR for testing companies is “ill-defined and undesirable” (p 2). The gist is that SR would be complicated in practice because reducing score gaps would conflict with existing intended uses of test scores. I was hoping for more discussion here but they move on quickly to a list of recommendations for improving testing and the admissions process itself. Some of these recommendations appear in different forms in other commentaries (focus on content-related validity and criterion referencing, reduce the costs of testing, consider how admissions changes when we don’t use tests), and there was one I didn’t see elsewhere (be careful of biases coded into historical practices and datasets that are used to build new tools and predictive models).

9. Koretz, Response to Koljatic et al: Neither a Persuasive Critique of Admissions Testing Nor Practical Suggestions for Improvement

  • As the title suggests, Koretz is mostly critical of the focus article in his commentary. He reviews its limitations and concludes that it’s largely unproductive. He says the article missteps with the Nike analogy, and that it fails to clarify the purposes and target constructs of admission testing, acknowledge the research showing a lack of bias, give evidence of how testing causes inequities, or provide clear or useful suggestions for improving the situation.
  • Koretz also questions the generally negative tone of the focus article, which comes through in key phrases that feel unnecessarily cynical (that’s my interpretation of his point), as well as the lack of support for some of its primary claims (insufficient or unclear references).

10. Lyons et al, Evolution of Equity Perspectives on Higher Education Admissions Testing: A Call for Increased Critical Consciousness

  • Lyons et al summarize how perspectives on admission testing have progressed over time from a) emphasizing aptitude over student background to b) emphasizing achievement over aptitude, and now to c) an awareness of opportunity gaps and d) recognition of more diverse knowledge and skills.
  • The authors argue that systematic group differences in test scores are justification for removing or limiting tests as gatekeepers to admission. They don’t address the broader issue of the admission process itself serving as a gatekeeper.
  • They end (p 3) with suggestions for expanding selection variables to include “passion and commitment, adaptability, short-term, and long-term goals, ability to build connections and a sense of belonging, cultural competence, ability to navigate adversity, and propensity for leadership and collective responsibility.” They also concede that “Academic achievement, as measured by standardized tests, may be useful in playing a limited, compensatory role, but always in partnership with divergent measures that value and represent multiple ways of knowing, doing, and being.”
  • The authors don’t acknowledge that testing companies are already exploring ways to measure these other variables (discussed, eg, in the Mattern commentary), and admissions programs already try to account for them on their own (eg, via personal statements and letters of recommendation). It’s unclear if the authors are suggesting we need new standardized measures of these variables.

11. Mattern et al, Reviving the Messenger: A Response to Koljatic et al

  • The authors, all from ACT, respond to focus article suggestions that the testing industry 1) review construct irrelevance and account for opportunity to learn, 2) explore new ways of testing to reduce score gaps, and 3) increase transparency and accountability generally.
  • They discuss how the testing industry is already addressing 1) by, eg, aligning tests to K12 curricula, asking college instructors via survey what they expect in new students, and documenting opportunity to learn while acknowledging that it has impacts beyond testing.
  • They interpret 2) as a call from the focus article to redesign admission tests themselves so that they produce “predetermined outcomes,” which Mattern et al reject as “unscientific” (p 2). I don’t know that the focus article meant to say that the tests should be modified to hide group differences, but I can see how their recommendations were open to interpretation. Rather than change the tests, Mattern et al recommend considering less traditional variables like social and emotional learning.
  • Finally, the authors respond to 3) with examples of their commitment to transparency, accountability, and equity. The list is not short, and ACT’s level of engagement seems pretty reasonable, more than they’re given credit for in the other commentaries.

12. Randall, From Construct to Consequences: Extending the Notion of Social Responsibility

  • Randall advocates for an anti-racist approach to standardized testing, in line with her EMIP article from earlier this year (Randall, 2021), wherein we reconsider how our current construct definitions and measurement methods sustain white supremacy.
  • Randall questions the familiar comparison of standardized testing to a doctor or thermometer, pointing out that decision-making in health care isn’t without flaws or racist outcomes, and concluding that the admission testing industry has “failed to… see itself as anything other than some kind of neutral ruler/diagnostic tool,” and that “the possibility that the test is wrong” is something that “many in the admission testing industry are resistant to even considering” (p 1).
  • I appreciate Randall’s critique of this analogy. I hadn’t scrutinized it in this way before, and can see how it oversimplifies the issue, granting to tests an objectivity and essential quality that they don’t deserve. That said, Randall seems to oversimplify the issue in the opposite direction, without accounting for the ways in which industry does now acknowledge and attempt to address the limitations of testing.
  • Randall recommends that, instead of college readiness, we label the target construct of admission testing as “the knowledge, values, and ways of understanding of the white dominant class” (p 2). I don’t know the critical theory literature behind recommendations like this well, and I’m curious how it squares with research showing that achievement gaps are largely explained by school poverty. It would be helpful to see examples of test content, in something like the released SAT questions, that uniquely privilege a student’s whiteness separately from their wealth.

13. Walker, Achieving Educational Equity Requires a Communal Effort

  • Walker summarizes points of agreement with the focus article, eg, standard practices are only a starting point for navigating SR in testing, and testing companies can be more engaged in promoting fair test use, including by collaborating with advocacy groups. Walker highlights the state of Hawaii as an example, where standards and assessments were implemented to better align with Hawaiian language immersion schools.
  • He also critiques and extends the arguments made in the focus article, saying that our traditional practice in test development and psychometrics “represents a mainstream viewpoint that generally fails to account for the many social and cultural aspects of learning and expression” (p 1). Referring to the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 2014), he says, “the Standards can only advocate for a superficially inclusive approach to pursuing an exclusive agenda. Thus, any test based on those standards will be woefully inadequate with respect to furthering equity” (p 2).
  • Walker, referring to a report from the UC, argues that admission tests already map onto college readiness, as evidenced in part by correlations between test scores and college grades. Critics would note here that test scores capitalize on the predictiveness of socioeconomic status, and, in the UC at least, they do so more than high school grades do (Geiser, 2020). Test scores measure more socioeconomic readiness than we might realize.
  • Walker concludes that equity will require much more than SR in testing. He says, “Any attempt to reform tests independently of the educational system would simply result in tests that no longer reflected what was happening in schools and that had lost relevance” (p 2). In addition to testing, we need to reevaluate SR in the education system itself. He shares a lot of good examples and references here (eg, on classroom equity and universal design).
  • Finally, Walker refers to democratic testing (Shohamy, 2001), a term I hadn’t heard before. He says, “testing should be a democratic process, conducted in collaboration and cooperation with those tested” (p 2). Further, “everyone involved in testing must assume responsibility for tests and their uses, instead of leaving all the responsibility in the hands of a powerful few” (p 2). This point resonates well with my recommendations for less secrecy and security in testing, and more access, partnership, and transparency.

14. Way, An Evidence-Based Response to College Admission Tests and Social Responsibility

  • The authors, both from College Board, highlight how the company is already working to address inequities through fee waivers, free test prep via Khan Academy, the Landscape tool, etc. In their view, by omitting this information, the focus article misrepresents the industry.
  • Regarding the focus article’s claim that industry isn’t sufficiently committed to transparency and accountability, the authors reply, “There is no clear explanation provided as to what they are referring to and the claim is simply not based on facts.”
  • The authors recommend that the National Council on Measurement in Education form a task force to move this work forward.

Summary

Here are a few themes I see in the focus article and commentaries.

  1. The focus article and some of the commentaries don’t really acknowledge what has already been done in admission testing with respect to SR. Perhaps this was omitted in the interest of space, but, ideally, a call for action would start with a review of existing efforts (some of which are listed above) and then present areas for improvement.
  2. The Nike analogy has some flaws, as can be expected with any analogy. It still seems instructive though, especially when we stretch it a bit and consider reversing the roles.
  3. As for next steps, there’s some consensus that we need increased transparency and more input from diverse stakeholders in the test development process.
  4. Improving SR in admission testing and beyond, so as to reduce educational inequities, will be complicated, and has implications for our education system in general. Though not directly addressed in the articles, the more divergent viewpoints (testing is pretty good vs inherently unjust) probably arise from a lack of consensus on broader issues like meritocracy, the feasibility of objective measurement, and the role of educational standards.

I’m curious to see how Koljatic, Silva, and Sireci bring the discussion together in a response, which I believe is forthcoming in EMIP.

References for Commentaries

Ackerman, P. L. (in press). The future of college admissions tests. Educational Measurement: Issues and Practice. https://doi.org/10.1111/emip.12456

Albano, A. D. (in press). Social responsibility in college admissions requires a reimagining of standardized testing. Educational Measurement: Issues and Practice. https://doi.org/10.1111/emip.12451

Briggs, D. C. (in press). Comment on college admissions tests and social responsibility. Educational Measurement: Issues and Practice. https://doi.org/10.1111/emip.12455

Camara, W. J. (in press). Negative consequences of testing and admission practices: Should blame be attributed to testing organizations? Educational Measurement: Issues and Practice. https://doi.org/10.1111/emip.12448

Franklin, D. W., Bryer, J., Andrade, H. L., & Liu, A. M. (in press). Design tests with a learning purpose. Educational Measurement: Issues and Practice. https://doi.org/10.1111/emip.12457

Geisinger, K. F. (in press). Social responsibility, fairness, and college admissions tests. Educational Measurement: Issues and Practice. https://doi.org/10.1111/emip.12450

Irribarra, D. T., & Santelices, M. V. (in press). Large-scale assessment and legitimacy beyond the corporate responsibility model. Educational Measurement: Issues and Practice. https://doi.org/10.1111/emip.12460

Klugman, E. M., An, L., Himmelsbach, Z., Litschwartz, S. L., & Nicola, T. P. (in press). The questions we should be asking about socially responsible college admission testing. Educational Measurement: Issues and Practice. https://doi.org/10.1111/emip.12449

Koretz, D. (in press). Response to Koljatic et al.: Neither a persuasive critique of admissions testing nor practical suggestions for improvement. Educational Measurement: Issues and Practice. https://doi.org/10.1111/emip.12454

Lyons, S., Hinds, F., & Poggio, J. (in press). Evolution of equity perspectives on higher education admissions testing: A call for increased critical consciousness. Educational Measurement: Issues and Practice. https://doi.org/10.1111/emip.12458

Mattern, K., Cruce, T., Henderson, D., Gridiron, T., Casillas, A., & Taylor, M. (in press). Reviving the messenger: A response to Koljatic et al. (2021). Educational Measurement: Issues and Practice. https://doi.org/10.1111/emip.12459

Randall, J. (in press). From construct to consequences: Extending the notion of social responsibility. Educational Measurement: Issues and Practice. https://doi.org/10.1111/emip.12452

Walker, M. E. (in press). Achieving educational equity requires a communal effort. Educational Measurement: Issues and Practice. https://doi.org/10.1111/emip.12465

Way, W. D., & Shaw, E. J. (in press). An evidence-based response to college admission tests and social responsibility. Educational Measurement: Issues and Practice. https://doi.org/10.1111/emip.12467

Other References

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (2014). Standards for educational and psychological testing. Lanham, MD: American Educational Research Association.

Geiser, S. (2020). SAT/ACT scores, high school GPA, and the problem of omitted variable bias: Why the UC Taskforce’s findings are spurious. https://cshe.berkeley.edu/publications/satact-scores-high-school-gpa-and-problem-omitted-variable-bias-why-uc-taskforce’s

Randall, J. (2021). “Color-neutral” is not a thing: Redefining construct definition and representation through a justice-oriented critical antiracist lens. Educational Measurement: Issues and Practice. https://doi.org/10.1111/emip.12429

Shohamy, E. (2001). Democratic assessment as an alternative. Language Testing, 18(4), 373–391.