Motivations for using the item response theory nominal response model to rank responses to multiple-choice items

Research output: Contribution to journal › Article › peer-review


Abstract

Several recent studies have employed item response theory (IRT) to rank incorrect responses to commonly used research-based multiple-choice assessments. These studies use Bock's nominal response model (NRM) for applying IRT to categorical (nondichotomous) data, but the response rankings only utilize half of the parameters estimated by the model. We present a mathematical argument for why this practice of using half of the NRM parameters when ranking responses is appropriate based on the primary question of multiple-choice tests: How can we use students' responses to test items to estimate their overall knowledge levels? We provide additional motivation for this practice by recognizing the similarities between Bock's NRM and the probability function of the canonical ensemble with degenerate energy states. As physicists often do, we exploit these mathematical similarities to gain new insights into the meaning of the IRT parameters and a richer understanding of the relationship between these parameters and student knowledge.
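For readers unfamiliar with the two functional forms the abstract compares, the following is a brief sketch of their standard textbook expressions. The notation (option slopes a, intercepts c, energies E, degeneracies g) is illustrative and is not necessarily the notation used in the article itself; the precise correspondence the authors draw between the two sets of parameters is developed in the paper.

% Bock's nominal response model: probability that a student with
% ability (knowledge level) \theta selects option k of item i,
% which has m_i response options in total
P_{ik}(\theta) \;=\;
  \frac{\exp\!\left(a_{ik}\,\theta + c_{ik}\right)}
       {\sum_{j=1}^{m_i} \exp\!\left(a_{ij}\,\theta + c_{ij}\right)}

% Canonical ensemble with degenerate energy levels: probability of
% finding the system at energy E_k with degeneracy g_k, where
% \beta = 1/(k_B T)
P(E_k) \;=\;
  \frac{g_k\, e^{-E_k/k_B T}}{\sum_j g_j\, e^{-E_j/k_B T}}
  \;=\;
  \frac{\exp\!\left(-\beta E_k + \ln g_k\right)}
       {\sum_j \exp\!\left(-\beta E_j + \ln g_j\right)}

Both expressions are softmax-type probability functions: each response option (or energy level) contributes a term in the exponent that is linear in a single latent variable (\theta or -\beta), plus an option-specific offset (c_{ik} or \ln g_k). Reading the exponents side by side suggests one natural identification, with the slope parameters playing the role of (negative) energies and the intercepts the role of log degeneracies, but the specific mapping and its interpretation should be taken from the article rather than from this sketch.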

  • Original language: English (US)
  • Article number: 010133
  • Journal: Physical Review Physics Education Research
  • Volume: 18
  • Issue number: 1
  • DOIs
  • State: Published - Jun 2022

All Science Journal Classification (ASJC) codes

  • Education
  • General Physics and Astronomy

