Quantitatively ranking incorrect responses to multiple-choice questions using item response theory

Trevor I. Smith, Kyle J. Louis, Bartholomew J. Ricci, Nasrine Bendjilali

Research output: Contribution to journal › Article › peer-review

Abstract

Research-based assessment instruments (RBAIs) are ubiquitous throughout both physics instruction and physics education research. The vast majority of analyses involving student responses to RBAI questions have focused on whether or not a student selects correct answers and on using correctness to measure growth. This approach often undervalues the rich information that may be obtained by examining students' particular choices of incorrect answers. In the present study, we aim to reveal some of this valuable information by quantitatively determining the relative correctness of various incorrect responses. To accomplish this, we propose an assumption that allows us to define relative correctness: students who have a high understanding of Newtonian physics are likely to answer more questions correctly, and are also more likely to choose better incorrect responses, than students who have a low understanding. Analyses using item response theory align with this assumption, and Bock's nominal response model allows us to uniquely rank each incorrect response. We present results from over 7,000 students' responses to the Force and Motion Conceptual Evaluation.
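For context, a minimal sketch of the model named in the abstract may help: the standard form of Bock's nominal response model, written here with generic slope and intercept parameters a_k and c_k (these symbols are not taken from the paper, and the exact parameterization used in the study may differ). For an item with m response options, the probability that a student with latent ability \theta selects option k is

P(X = k \mid \theta) = \frac{\exp(a_k \theta + c_k)}{\sum_{h=1}^{m} \exp(a_h \theta + c_h)}

Under this model, options with larger slopes a_k become relatively more probable as \theta increases, so ordering an item's response options by their estimated slopes is one natural way to rank incorrect responses by relative correctness; this is offered only as an illustration of the model, not as the authors' exact procedure.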

Original language: English (US)
Journal: Unknown Journal
State: Published - Jun 2 2019

All Science Journal Classification (ASJC) codes

  • General

