Rubric development and inter-rater reliability issues in assessing learning outcomes

James A. Newell, Kevin Dahm, Heidi L. Newell

Research output: Contribution to journal › Conference article › peer-review

11 Scopus citations

Abstract

This paper describes the development of rubrics that help evaluate student performance and relate that performance directly to the educational objectives of the program. Issues in accounting for different constituencies, selecting items for evaluation, and minimizing the time required for data analysis are discussed. Methods for testing the rubrics for consistency between different faculty raters are presented, along with a specific example of how inconsistencies were addressed. Finally, the difference between course-level and programmatic assessment, and the applicability of rubric development to each, is considered.

Original language: English (US)
Pages (from-to): 1809-1816
Number of pages: 8
Journal: ASEE Annual Conference Proceedings
State: Published - Dec 1 2002
Event: 2002 ASEE Annual Conference and Exposition: Vive L'ingenieur - Montreal, Que., Canada
Duration: Jun 16 2002 - Jun 19 2002

All Science Journal Classification (ASJC) codes

  • Engineering (all)
