Navigating uncertainties of introducing artificial intelligence (AI) in healthcare: The role of a Norwegian network of professionals
Peer reviewed journal article (published version)
Permanent link: https://hdl.handle.net/11250/3110474
Publication date: 2023
Original version (DOI): 10.1016/j.techsoc.2023.102432

Abstract
Artificial Intelligence (AI) technologies are expected to solve pressing challenges in healthcare services worldwide. However, the current state of introducing AI is characterised by several issues that complicate and delay deployment. These issues concern topics such as ethics, regulation, data access, human trust, and the limited evidence on AI technologies in real-world clinical settings. They further encompass uncertainties, for instance, whether AI technologies will ensure equal and safe patient treatment, or whether AI results will be accurate and transparent enough to establish user trust. Collective efforts by actors from different backgrounds and affiliations are required to navigate this complex landscape. This article explores the role of such collective efforts
by investigating how an informally established network of professionals works to enable AI in the Norwegian
public healthcare services. The study takes a qualitative longitudinal case study approach and is based on data
from non-participant observations of digital meetings and interviews. The data are analysed by drawing on
perspectives and concepts from Science and Technology Studies (STS) dealing with innovation and sociotechnical change, where collective efforts are conceptualised as actor mobilisation. The study finds that, in the case of the ambiguous sociotechnical phenomenon of AI, some of the uncertainties related to introducing AI in healthcare may be reduced as more deployments occur, while others will persist or emerge. Mobilising spokespersons representing actors not yet part of the discussions, such as AI users or researchers studying AI technologies in use, can enable a ‘stronger’ hybrid knowledge production. This hybrid knowledge is essential for identifying, mitigating, and monitoring existing and emerging uncertainties, thereby ensuring sustainable AI deployments.