The guts of assessment: a digital architecture for machine learning and analogue judgement

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

The guts of assessment: a digital architecture for machine learning and analogue judgement. / Johnson, Mark; Saleh, Rafiq.

In: Interactive Learning Environments, 2023.


Harvard

Johnson, M & Saleh, R 2023, 'The guts of assessment: a digital architecture for machine learning and analogue judgement', Interactive Learning Environments. https://doi.org/10.1080/10494820.2022.2135105

APA

Johnson, M., & Saleh, R. (2023). The guts of assessment: a digital architecture for machine learning and analogue judgement. Interactive Learning Environments. https://doi.org/10.1080/10494820.2022.2135105

Vancouver

Johnson M, Saleh R. The guts of assessment: a digital architecture for machine learning and analogue judgement. Interactive Learning Environments. 2023. https://doi.org/10.1080/10494820.2022.2135105

Author

Johnson, Mark; Saleh, Rafiq. / The guts of assessment: a digital architecture for machine learning and analogue judgement. In: Interactive Learning Environments. 2023.

Bibtex

@article{a448d6f65e6f4ae9a5bc7e97f7fe3ce9,
title = "The guts of assessment: a digital architecture for machine learning and analogue judgement",
abstract = "Educational assessment is inherently uncertain: physiological, psychological and social factors play an important role in establishing judgements which are assumed to be {"}absolute{"}. AI and other algorithmic approaches to the grading of student work strip out uncertainty, leading to a lack of inspectability in machine judgement and consequent problems of trust and reliability. The technique of Adaptive Comparative Judgement (ACJ), by focusing on small-stakes binary comparisons, provides an alternative approach to dealing with uncertainty in grading. Rankings can be produced which codify uncertainty, rendering machine judgements inspectable. However, ACJ demands resources in terms of time and the number of people making comparisons. Machine learning trained to make binary comparisons can help to make the process of human comparison more efficient. In combining ACJ and AI, we argue that the result is an analogue-digital system in which the physiological/analogue processes of assessment can be coordinated with digital services that steer human assessment towards the judgements which are most uncertain and require the most human deliberation. Drawing on our design of such a system developed for medical judgements, we describe a general architecture for human-machine assessment in education and discuss its potential in bridging the gap between analogue human cognition and digital machine learning.",
keywords = "Comparative judgement, machine learning, assessment, analogue systems, ranking",
author = "Mark Johnson and Rafiq Saleh",
year = "2023",
doi = "10.1080/10494820.2022.2135105",
language = "English",
journal = "Interactive Learning Environments",
issn = "1049-4820",
publisher = "Taylor & Francis",
}

RIS

TY - JOUR

T1 - The guts of assessment

T2 - a digital architecture for machine learning and analogue judgement

AU - Johnson, Mark

AU - Saleh, Rafiq

PY - 2023

Y1 - 2023

N2 - Educational assessment is inherently uncertain: physiological, psychological and social factors play an important role in establishing judgements which are assumed to be "absolute". AI and other algorithmic approaches to the grading of student work strip out uncertainty, leading to a lack of inspectability in machine judgement and consequent problems of trust and reliability. The technique of Adaptive Comparative Judgement (ACJ), by focusing on small-stakes binary comparisons, provides an alternative approach to dealing with uncertainty in grading. Rankings can be produced which codify uncertainty, rendering machine judgements inspectable. However, ACJ demands resources in terms of time and the number of people making comparisons. Machine learning trained to make binary comparisons can help to make the process of human comparison more efficient. In combining ACJ and AI, we argue that the result is an analogue-digital system in which the physiological/analogue processes of assessment can be coordinated with digital services that steer human assessment towards the judgements which are most uncertain and require the most human deliberation. Drawing on our design of such a system developed for medical judgements, we describe a general architecture for human-machine assessment in education and discuss its potential in bridging the gap between analogue human cognition and digital machine learning.

AB - Educational assessment is inherently uncertain: physiological, psychological and social factors play an important role in establishing judgements which are assumed to be "absolute". AI and other algorithmic approaches to the grading of student work strip out uncertainty, leading to a lack of inspectability in machine judgement and consequent problems of trust and reliability. The technique of Adaptive Comparative Judgement (ACJ), by focusing on small-stakes binary comparisons, provides an alternative approach to dealing with uncertainty in grading. Rankings can be produced which codify uncertainty, rendering machine judgements inspectable. However, ACJ demands resources in terms of time and the number of people making comparisons. Machine learning trained to make binary comparisons can help to make the process of human comparison more efficient. In combining ACJ and AI, we argue that the result is an analogue-digital system in which the physiological/analogue processes of assessment can be coordinated with digital services that steer human assessment towards the judgements which are most uncertain and require the most human deliberation. Drawing on our design of such a system developed for medical judgements, we describe a general architecture for human-machine assessment in education and discuss its potential in bridging the gap between analogue human cognition and digital machine learning.

KW - Comparative judgement

KW - machine learning

KW - assessment

KW - analogue systems

KW - ranking

U2 - 10.1080/10494820.2022.2135105

DO - 10.1080/10494820.2022.2135105

M3 - Journal article

JO - Interactive Learning Environments

JF - Interactive Learning Environments

SN - 1049-4820

ER -
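The abstract describes combining Adaptive Comparative Judgement with machine steering towards the most uncertain comparisons. Purely as an illustrative sketch of that idea (not the authors' implementation), the following fits a standard Bradley-Terry model to binary comparisons and then requests the next human judgement for the pair whose predicted outcome is closest to 50/50. All function names, the `eps` prior, and the toy data are assumptions made for the example.

```python
import math

def fit_bradley_terry(n_items, comparisons, iters=200, eps=0.5):
    """Fit Bradley-Terry strengths from (winner, loser) index pairs.

    eps is a pseudo-count prior (a virtual draw against a reference
    item of strength 1) that keeps every strength positive even for
    items with no wins yet.
    """
    w = [1.0] * n_items
    wins = [0] * n_items
    for winner, _ in comparisons:
        wins[winner] += 1
    for _ in range(iters):
        new_w = []
        for i in range(n_items):
            num = wins[i] + eps
            denom = 2.0 * eps / (w[i] + 1.0)  # virtual games vs reference item
            for a, b in comparisons:
                if i in (a, b):
                    j = b if a == i else a
                    denom += 1.0 / (w[i] + w[j])
            new_w.append(num / denom)
        # normalise so the geometric mean strength is 1
        g = math.exp(sum(math.log(x) for x in new_w) / n_items)
        w = [x / g for x in new_w]
    return w

def most_uncertain_pair(w, judged):
    """Return the unjudged pair whose predicted outcome is closest to 50/50."""
    best, best_gap = None, 1.0
    n = len(w)
    for i in range(n):
        for j in range(i + 1, n):
            if (i, j) in judged:
                continue
            p = w[i] / (w[i] + w[j])  # P(i beats j) under the model
            if abs(p - 0.5) < best_gap:
                best, best_gap = (i, j), abs(p - 0.5)
    return best

# Toy simulation: 5 pieces of work; the "human judge" deterministically
# prefers the item with the higher hidden quality.
true_quality = [1.0, 2.0, 3.0, 4.0, 5.0]
n = len(true_quality)
comparisons = [(i + 1, i) for i in range(n - 1)]   # seed round: adjacent pairs
judged = {(i, i + 1) for i in range(n - 1)}
w = fit_bradley_terry(n, comparisons)
while (pair := most_uncertain_pair(w, judged)) is not None:
    i, j = pair
    winner, loser = (i, j) if true_quality[i] > true_quality[j] else (j, i)
    comparisons.append((winner, loser))
    judged.add(pair)
    w = fit_bradley_terry(n, comparisons)

ranking = sorted(range(n), key=lambda i: w[i])  # worst first
print(ranking)  # -> [0, 1, 2, 3, 4]
```

In a real ACJ deployment the loop body would be a request to a human assessor rather than a lookup of hidden quality, and a trained comparison model could pre-judge the easy pairs so human effort concentrates on the uncertain ones, which is the division of labour the abstract proposes.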
