Visualizing Conditional Standard Error in the GRE

Below is some R code for visualizing measurement error across the GRE score scale, plotted against percentiles. Data come from an ETS report at https://www.ets.org/s/gre/pdf/gre_guide.pdf.

The plot shows the conditional standard error of measurement (SEM) for GRE verbal scores. The SEM is the expected variability in scores attributable to random error in the measurement process. For details, see my last post.

Here, the SEM is conditional on GRE score, with more error evident at lower verbal scores, and less at higher scores where measurement is more precise. As with other forms of standard error, the SEM can be used to build confidence intervals around an estimate. The plot has ribbons for 68% and 95% confidence intervals, based on +/- 1 and 2 SEM.

# Load ggplot2 package
library("ggplot2")

# Put percentiles into data frame, pasting from ETS
# report Table 1B
pct <- data.frame(gre = 170:130,
matrix(c(99, 96, 99, 95, 98, 93, 98, 90, 97, 89,
  96, 86, 94, 84, 93, 82, 90, 79, 88, 76, 86, 73,
  83, 70, 80, 67, 76, 64, 73, 60, 68, 56, 64, 53,
  60, 49, 54, 45, 51, 41, 46, 37, 41, 34, 37, 30,
  33, 26, 29, 23, 26, 19, 22, 16, 19, 13, 16, 11,
  14, 9, 11, 7, 9, 6, 8, 4, 6, 3, 4, 2, 3, 2, 2,
  1, 2, 1, 1, 1, 1, 1, 1, 1),
  nrow = 41, byrow = TRUE))

# Add variable names
colnames(pct)[2:3] <- c("pct_verbal", "pct_quant")

# Subset and add conditional SEM from Table 5E
sem <- data.frame(pct[c(41, seq(36, 1, by = -5)), ],
  sem_verbal = c(3.9, 3.5, 2.9, 2.5, 2.3, 2.1, 2.1,
    2.0, 1.4),
  sem_quant = c(3.5, 2.9, 2.4, 2.2, 2.1, 2.0, 2.1,
    2.1, 1.0),
  row.names = NULL)

# Plot percentiles on x and GRE on y with
# error ribbons
ggplot(sem, aes(pct_verbal, gre)) +
  # 95% interval, +/- 2 SEM, in blue
  geom_ribbon(aes(ymin = gre - sem_verbal * 2,
    ymax = gre + sem_verbal * 2),
    fill = "blue", alpha = .2) +
  # 68% interval, +/- 1 SEM, in red
  geom_ribbon(aes(ymin = gre - sem_verbal,
    ymax = gre + sem_verbal),
    fill = "red", alpha = .2) +
  geom_line()

Confidence Intervals in Measurement vs Political Polls

In class this week we covered reliability and went through some examples of how measurement error, the opposite of reliability, can be converted into a standard error for building confidence intervals (CI) around test scores. Students are often surprised to learn that, despite a moderate to strong reliability coefficient, a test can still introduce an unsettling amount of error into results.

Measurement

Here’s an example from testing before I get to error in political polling. The GRE verbal reasoning test has an internal consistency reliability of 0.92, with associated standard error of measurement (SEM) of 2.4 (see Table 5A in this ETS report).
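As a refresher, the SEM comes directly from the reliability and the standard deviation of the score scale. A quick check, assuming a verbal score SD of roughly 8.5 (my back-calculation, not a value quoted here), reproduces the reported SEM:

$$SEM = SD\sqrt{1 - r_{xx}} = 8.5\sqrt{1 - .92} \approx 2.4$$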

Let’s say you get a score of $X = 154$ on the verbal reasoning test. This puts you in the 64th percentile among the norming sample (Table 1B). We can build a CI around your score as

$$CI = X \pm SEM \times z$$

or

$$CI = 154 \pm 2.4 \times 1.96$$

where the z of 1.96 comes from the unit normal curve, capturing the middle 95% of the distribution.

After rounding, we have a range of about 9 points within which we’re 95% confident your true score should fall. That’s 154 – 4.7 = 149.3 at the bottom (41st percentile after rounding) and 154 + 4.7 = 158.7 at the top (83rd percentile after rounding).
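Here’s the same calculation in R, as a quick check of the arithmetic, using the score and SEM from above.

# 95% CI around an observed verbal score of 154,
# using the reported SEM of 2.4 from ETS Table 5A
x <- 154
sem <- 2.4
z <- qnorm(.975)  # about 1.96
x + c(-1, 1) * sem * z
# [1] 149.2961 158.7039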

I’ll leave it as an exercise for you to run the same calculations on the analytical writing component of the GRE, which has a reliability of 0.86 and a standard error of 0.32. In either case, the CI will capture a sizable chunk of the score scale, which calls into question the utility of tests like the GRE for comparisons among individuals.

I should mention that the GRE is based on item response theory, which presents error as a function of the construct being measured, so the SEM and CI vary over the score scale. The example above is simplified to a single overall reliability and SEM.

Polling

Moving on to political polls, Monmouth University is reporting the following results for Democratic candidate preference from a phone poll conducted this week with 503 prospective voters in New Hampshire (full report here).

  1. Sanders with 24%
  2. Buttigieg with 20%
  3. Biden with 17%
  4. Warren with 13%

This is the ranking for the top four candidates. Percentages decrease for the remaining choices.

Toward the end of the article, the margin of error is reported as 4.4 percentage points. This was probably computed from a generic standard error (SE), calculated as

$$SE = \frac{\sqrt{p \times q}}{\sqrt{n}}$$

or

$$\frac{\sqrt{.5 \times .5}}{\sqrt{503}}$$

where p is the proportion (percentage / 100) set to .5 because it produces the largest possible variability and SE, and q = 1 – p. This gives us SE = 0.022, or 2.2%.
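In R, this is a quick sketch that reproduces the reported values.

# Margin of error for the poll, using the worst-case p = .5
n <- 503
p <- .5
se <- sqrt(p * (1 - p) / n)
round(se, 3)  # 0.022
round(se * 1.96 * 100, 1)  # 4.4 percentage points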

The 4.4, found with $SE \times 1.96$, is only half of the confidence interval. So, we’re 95% confident that the actual results for Sanders fall between 24 – 4.4 = 19.6% and 24 + 4.4 = 28.4%, a range which captures the result for Buttigieg.

All of the point differences for adjacent candidates in the rankings, which are currently being showcased by major news outlets, fall within this margin of error.

Note that we could calculate SEs and confidence intervals that are specific to the percentages for each candidate. For Sanders we get an SE of 1.9%, and for Buttigieg 1.8%. We could also use statistical tests to compare points more formally. Whatever the approach, we need to be clearer about the impact of sampling error and discuss results like these in context.
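Here’s the candidate-specific calculation in R, which reproduces the SEs noted above.

# SEs based on each candidate's observed proportion
p <- c(sanders = .24, buttigieg = .20, biden = .17, warren = .13)
se <- sqrt(p * (1 - p) / 503)
round(se * 100, 1)  # 1.9, 1.8, 1.7, 1.5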

Should We Drop the SAT/ACT as Requirements for Admissions?

California is reconsidering the role of tests like the SAT and ACT in its college admissions. Around 1,000 other colleges have already gone test-optional according to fairtest.org, but a shift for California would be big news, considering the size of the state university systems, which combined enrolled over 700,000 students for fall 2018.

I’m trying to get up to speed on this somewhat controversial issue. My research in testing focuses mainly on development and validation at the item level, and I’m less familiar with validity research on admissions policies and the broader consequences of test use in this area.

This week, I’ve gone through the following documents, all available online.

These documents seem to capture the gist of the debate, which centers on a few key issues. I’ll summarize here and then dig deeper in future posts.

Those in favor of norm-referenced admissions tests argue that the tests contribute to predicting undergraduate performance above and beyond other admissions variables like high school GPA and criterion-referenced tests, and they do so in a standardized way, with proctored administration, and using metrics that are independent of program or state.

Those in favor of dropping admissions tests, or making them optional, argue that the tests are more reflective of group differences than are other admissions variables. The costs, in terms of potential for bias, outweigh the benefits, in terms of incremental increases in predictive power.

In the end, the main question is: do we need a standardized measure of general content in the admissions process?

If so, what other options meet this need, and are available on an international scale, but don’t suffer from the same limitations as the SAT and ACT? Alternatively, is there room for improvement in current norm-referenced tests?

If not, how do we address limitations in the remaining admissions metrics, some of which may also be susceptible to misuse?

Demo Code from Recent Paper in APM

A colleague and I recently published a paper in Applied Psychological Measurement titled Linking With External Covariates: Examining Accuracy by Anchor Type, Test Length, Ability Difference, and Sample Size. A pre-print copy is available here.

As the title suggests, we looked at some psychometric situations wherein the process of linking measurement scales could benefit from external information. Here’s the abstract.

Research has recently demonstrated the use of multiple anchor tests and external covariates to supplement or substitute for common anchor items when linking and equating with nonequivalent groups. This study examines the conditions under which external covariates improve linking and equating accuracy, with internal and external anchor tests of varying lengths and groups of differing abilities. Pseudo forms of a state science test were equated within a resampling study where sample size ranged from 1,000 to 10,000 examinees and anchor tests ranged in length from eight to 20 items, with reading and math scores included as covariates. Frequency estimation linking with an anchor test and external covariate was found to produce the most accurate results under the majority of conditions studied. Practical applications of linking with anchor tests and covariates are discussed.

The study is somewhat novel in its use of resampling at both the person and item levels. The result is a different sample of test takers taking a different sample of items at each study replication. I created an R Markdown file (saved as txt) that demonstrates the process for a reduced set of conditions.
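For a feel of the two-level resampling idea, here’s a minimal sketch in R, with simulated 0/1 responses standing in for the real data (names and dimensions are made up for illustration; the full process is in the demo files below).

# Resample persons (with replacement) and items for one replication
set.seed(1)
resp <- matrix(rbinom(200 * 40, 1, .6), nrow = 200)  # simulated scores
persons <- sample(nrow(resp), 100, replace = TRUE)  # sample of test takers
items <- sample(ncol(resp), 20)  # sample of items
resp_rep <- resp[persons, items]  # data for this replication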

multi-anchor-demo.txt
multi-anchor-demo.html

Getting Things Started

This is the first blog post on my new academic site. The main purpose of the site is to share educational and psychological measurement info and resources developed through my teaching and research.

My intro measurement textbook is available in HTML and PDF formats at https://www.thetaminusb.com/intro-measurement-r/. The book is designed for advanced undergraduate or beginner graduate courses in the theory and applications of measurement in education and psychology. Instructions and examples are given throughout on conducting psychometric analyses in R. If you’d like to contribute, email me or see the github repository at https://github.com/talbano/intro-measurement.

Note that the book will be updated in December 2018 with revisions to the chapters on factor analysis, validity, and test evaluation. A Spanish translation is also underway and should be ready in 2019 at https://www.thetaminusb.com/intro-measurement-r-sp/.

I’m also working on forums for questions and conversations around measurement topics, deriving from the book, and equating topics, deriving from my R package and documentation. Stay tuned for links.