What Are Humans Doing in the Loop? Co-Reasoning and Practical Judgment When Using Machine Learning-Driven Decision Aids
Peer reviewed, Journal article
Published version
Date
2024
Abstract
Within the ethical debate on Machine Learning-driven decision support systems (ML_CDSS),
notions such as “human in the loop” or “meaningful human control” are often cited as being
necessary for ethical legitimacy. In addition, ethical principles usually serve as the major point
of reference in ethical guidance documents, which state that conflicting principles need to
be weighed and balanced against each other. Starting from a neo-Kantian viewpoint inspired
by Onora O'Neill, this article offers a concrete suggestion for how to interpret the role of the
“human in the loop” and how to move beyond the perspective of rival ethical principles in the
evaluation of AI in health care. We argue that patients should be perceived as “fellow workers”
and epistemic partners in the interpretation of ML_CDSS outputs. We further highlight that a
meaningful process of integrating (rather than weighing and balancing) ethical principles is
most appropriate in the evaluation of medical AI.