Metadata
Title
Getting it right
Category
general
UUID
d66a2369e3f24cea94d6a64a74e54f81
Source URL
https://www.lboro.ac.uk/research/spotlights/marking-errors/
Parent URL
https://www.lboro.ac.uk/research/spotlights/
Crawl Time
2026-03-24T00:02:16+00:00
Rendered Raw Markdown

Getting it right


Transforming educational assessment with comparative judgement

The marking of student work is plagued by human error, which has caused three systemic problems that weaken public confidence in qualifications.

To reduce marking error, assessments often comprise short, piecemeal questions – fragmenting curricular knowledge and encouraging rote learning. Marking error also means that grades are unreliable, unfairly hindering the life chances of some students. Moreover, marking error means standards over time and across awarding bodies cannot be accurately evaluated or maintained.

Our research into comparative judgement (CJ) led to the development of an assessment technique that eliminates marking error and avoids these problems. The method is widely used across the globe to support marking and enhance learning.

Our impact

Qualifications regulator Ofqual routinely applies our research

No More Marking Ltd routinely applies our research

The research

Our research into marking error began almost a decade ago with a study investigating why GCSE mathematics was not fit for purpose.

We found that having to rapidly mark thousands of exam scripts resulted in question papers almost entirely comprising short-form questions that minimise marking error, but reduce validity.

Our comparative judgement (CJ) technique for assessing mathematical knowledge eliminates marking itself. Instead, subject experts decide which of two presented scripts is better against a high-level criterion such as conceptual understanding.

Many such binary decisions are collected from several assessors and then fitted to the Bradley-Terry model to produce a score for each student.
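The fitting step described above can be sketched in code. The following is a minimal illustration, not the implementation used in the research: it fits the Bradley-Terry model to a set of (winner, loser) decisions using the classic minorisation-maximisation (MM) iteration, in which each item's strength is repeatedly re-estimated from its win count and its head-to-head comparison totals. The function name and data layout are assumptions for the example.

```python
def fit_bradley_terry(comparisons, n_items, iterations=200):
    """Fit Bradley-Terry strengths from pairwise judgements.

    comparisons: list of (winner, loser) item indices.
    Returns one strength score per item; higher means judged better.
    Under the model, P(i beats j) = p[i] / (p[i] + p[j]).
    """
    wins = [0] * n_items
    # n[i][j]: how many times items i and j were compared (symmetric)
    n = [[0] * n_items for _ in range(n_items)]
    for winner, loser in comparisons:
        wins[winner] += 1
        n[winner][loser] += 1
        n[loser][winner] += 1

    p = [1.0] * n_items  # uniform starting strengths
    for _ in range(iterations):
        new_p = []
        for i in range(n_items):
            # MM update: p_i = W_i / sum_j n_ij / (p_i + p_j)
            denom = sum(n[i][j] / (p[i] + p[j])
                        for j in range(n_items) if j != i and n[i][j])
            new_p.append(wins[i] / denom if denom else p[i])
        # Strengths are only identified up to scale, so normalise
        total = sum(new_p)
        p = [x * n_items / total for x in new_p]
    return p
```

In practice each script needs at least one win and one loss, and the comparison graph must be connected, for the estimates to be well determined; CJ designs typically ensure this by allocating many pairings per script across several assessors.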

Further studies have demonstrated the reliability of the CJ system as an assessment tool when applied to open-ended mathematics assessments, which better reveal a student's understanding. What's more, we demonstrated the reliability of CJ assessment outcomes across age groups, topics, institutions and jurisdictions.

We then developed a CJ-based technique for investigating standards over time and across equivalent qualifications. This revealed that standards in A-level Mathematics have declined since the 1960s. This research received much national media coverage – and the British Educational Research Journal’s Editor’s Choice Award 2017.

Research conducted at Loughborough has directly impacted the examination system in England and Wales, making it fairer and impacting around 1.1 million candidates per year. Our comparability and awarding work based on Loughborough’s research is crucial for ensuring public acceptance of the examination system.

Dr Michelle Meadows, Deputy Chief Regulator, Ofqual

Decline in A-level Mathematics standards since the 1960s

To date, 2,227 schools in 27 countries have subscribed to No More Marking’s services


Meet the experts

Dr Lara Alcock

Reader in Mathematics Education

Dr Colin Foster

Reader in Mathematics Education

Professor Camilla Gilmore

Professor of Mathematical Cognition

Professor Matthew Inglis

Professor of Mathematical Cognition

Dr Ian Jones

Reader in Educational Assessment