Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

Putting double marking to the test: a framework to assess if it is worth the trouble

Cannings-John, Rebecca Louise ORCID: https://orcid.org/0000-0001-5235-6517, Hawthorne, Kamila ORCID: https://orcid.org/0000-0001-9765-2926, Hood, Kerenza ORCID: https://orcid.org/0000-0002-5268-8631 and Houston, Helen 2005. Putting double marking to the test: a framework to assess if it is worth the trouble. Medical Education 39 (3) , pp. 299-308. 10.1111/j.1365-2929.2005.02093.x

Full text not available from this repository.

Abstract

Background: It is a challenge to assign a mark that accurately measures the quality of students' work in essay-type assessments that require an element of judgement and fairness by the markers. Double marking such assessments has been seen as a way of improving the reliability of the mark. The analysis approach often taken is to look for absolute agreement between markers instead of looking at all aspects of reliability.

Aim: To develop an analytic process that will examine the components and meanings of reliability calculations that can be used for assessing the value of double marking a piece of work.

Methods: An undergraduate case study assessment in General Practice was used as an illustration. Datasets of double marking were collected retrospectively for 1999−2000, and prospectively for 2002−03. An assessment of inter-marker agreement and its effect on the reliability of the final mark for students was made, using methods dependent on the type of data collected and Generalisability Theory.

Results and Conclusions: The data were used to illustrate how to interpret the results of Bland and Altman plots, ANOVA tables and Cohen's kappa calculations. Generalisability Theory was used to show that, while there was reasonable agreement between markers, the reliability of the mark for the student was still only moderate, probably due to unexplained variability elsewhere in the process. Possible reasons for this variability are discussed. A flowchart of the decisions and actions needed to judge whether a piece of work should be double marked has been constructed.
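To illustrate one of the agreement statistics the abstract refers to, the sketch below computes Cohen's kappa for two markers assigning categorical grades. It is a minimal, self-contained example with hypothetical grade data; it is not the authors' analysis code and does not reproduce the study's datasets.

```python
from collections import Counter

def cohens_kappa(marker_a, marker_b):
    """Cohen's kappa: chance-corrected agreement between two markers.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e is the agreement expected by
    chance from each marker's marginal grade frequencies.
    """
    assert len(marker_a) == len(marker_b) and marker_a
    n = len(marker_a)
    # Observed agreement: fraction of scripts given the same grade.
    p_o = sum(a == b for a, b in zip(marker_a, marker_b)) / n
    # Chance agreement from the two markers' grade distributions.
    freq_a = Counter(marker_a)
    freq_b = Counter(marker_b)
    p_e = sum(freq_a[g] * freq_b[g] for g in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical grades from two markers for ten case studies.
a = ["A", "B", "B", "C", "A", "B", "C", "C", "B", "A"]
b = ["A", "B", "C", "C", "A", "B", "B", "C", "B", "B"]
print(round(cohens_kappa(a, b), 3))  # → 0.538
```

Values of kappa near 1 indicate strong agreement beyond chance; values near 0 indicate agreement no better than chance, which is why the paper treats raw percentage agreement alone as an incomplete picture of reliability.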

Item Type: Article
Date Type: Publication
Status: Published
Schools: Medicine
Subjects: L Education > L Education (General)
R Medicine > R Medicine (General)
Uncontrolled Keywords: education; medical; undergraduate standards; quality control; educational measurement; educational standards; reproducibility of results; ANOVA.
Publisher: Wiley
ISSN: 1365-2923
Last Modified: 25 Oct 2022 09:45
URI: https://orca.cardiff.ac.uk/id/eprint/59878

Citation Data

Cited 13 times in Scopus.
