Show simple item record

dc.contributor.author: Khadka, Rabindra
dc.contributor.author: Jha, Debesh
dc.contributor.author: Riegler, Michael A.
dc.contributor.author: Hicks, Steven
dc.contributor.author: Thambawita, Vajira
dc.contributor.author: Ali, Sharib
dc.contributor.author: Halvorsen, Pål
dc.date.accessioned: 2022-03-09T14:18:45Z
dc.date.available: 2022-03-09T14:18:45Z
dc.date.created: 2022-01-28T15:09:25Z
dc.date.issued: 2022-02-03
dc.identifier.issn: 0010-4825
dc.identifier.issn: 1879-0534
dc.identifier.uri: https://hdl.handle.net/11250/2984082
dc.description.abstract: Widely used traditional supervised deep learning methods require a large number of training samples but often fail to generalize to unseen datasets. The broader application of any trained model is therefore quite limited for medical imaging in clinical practice. Training a separate model for each unique lesion category or patient population would require sufficiently large curated datasets, which is not practical in a real-world clinical set-up. Few-shot learning approaches can not only minimize the need for an enormous number of reliable ground-truth labels, which are labour-intensive and expensive to obtain, but can also be used to model a dataset coming from a new population. To this end, we propose to exploit an optimization-based implicit model-agnostic meta-learning (iMAML) algorithm under few-shot settings for medical image segmentation. Our approach can leverage the learned weights from diverse but small training samples to perform analysis on unseen datasets with high accuracy. We show that, unlike classical few-shot learning approaches, our method improves generalization capability. To our knowledge, this is the first work that exploits iMAML for medical image segmentation and explores the strength of the model in scenarios such as meta-training on unique and mixed instances of lesion datasets. Our quantitative results on publicly available skin and polyp datasets show that the proposed method outperforms the naive supervised baseline model and two recent few-shot segmentation approaches by large margins. In addition, our iMAML approach shows an improvement of 2%–4% in dice score compared to its counterpart MAML for most experiments.
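The abstract describes an inner-loop/outer-loop meta-learning scheme. As a purely illustrative aid (this is not the paper's code, datasets, or model; the function names and the toy 1-D regression task family are hypothetical stand-ins for per-dataset segmentation tasks), a minimal NumPy sketch of the MAML-style structure that iMAML builds on might look like this — the key difference in iMAML is that the outer meta-gradient is obtained by solving an implicit equation rather than, as here, using a first-order approximation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Hypothetical stand-in for one small dataset: y = a * x with a random
    # slope a, and only 10 support examples (the "few-shot" regime).
    a = rng.uniform(0.5, 2.0)
    x = rng.uniform(-1.0, 1.0, size=10)
    return x, a * x

def loss(w, x, y):
    # Mean squared error of the 1-parameter model w * x.
    return float(np.mean((w * x - y) ** 2))

def grad(w, x, y):
    # Exact gradient of the loss above with respect to w.
    return float(np.mean(2.0 * (w * x - y) * x))

def adapt(w, x, y, inner_lr=0.1, steps=5):
    # Inner loop: a few gradient steps from the shared meta-initialization.
    for _ in range(steps):
        w = w - inner_lr * grad(w, x, y)
    return w

def meta_train(w=0.0, meta_lr=0.05, iters=200):
    # Outer loop: update the meta-initialization using the gradient at the
    # task-adapted parameters (first-order MAML approximation; iMAML instead
    # recovers the exact meta-gradient implicitly).
    for _ in range(iters):
        x, y = sample_task()
        w_adapted = adapt(w, x, y)
        w = w - meta_lr * grad(w_adapted, x, y)
    return w

w_meta = meta_train()
```

After meta-training, `adapt(w_meta, ...)` on a previously unseen task should reach a lower loss than adapting from scratch, which is the generalization property the abstract claims for few-shot settings.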
dc.description.sponsorship: D. Jha is funded by the PRIVATON project (#263248), which is funded by the Research Council of Norway (RCN). S. Ali is supported by the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre (BRC). Our experiments were performed on the Experimental Infrastructure for Exploration of Exascale Computing (eX3) system, which is financially supported by RCN under contract 270053.
dc.language.iso: eng
dc.publisher: Elsevier
dc.relation.ispartofseries: Computers in Biology and Medicine; Volume 143, April 2022, 105227
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.subject: Meta-learning
dc.subject: Few-shot learning
dc.subject: Colonoscopy
dc.subject: Polyp segmentation
dc.subject: Wireless capsule endoscopy
dc.subject: Skin lesion segmentation
dc.title: Meta-learning with implicit gradients in a few-shot setting for medical image segmentation
dc.type: Peer reviewed
dc.type: Journal article
dc.description.version: publishedVersion
dc.rights.holder: © 2022 The Authors
dc.source.articlenumber: 105227
cristin.ispublished: false
cristin.fulltext: original
cristin.qualitycode: 1
dc.identifier.doi: https://doi.org/10.1016/j.compbiomed.2022.105227
dc.identifier.cristin: 1992702
dc.source.journal: Computers in Biology and Medicine
dc.source.volume: 143
dc.source.issue: 143
dc.source.pagenumber: 1-10
dc.relation.project: Norges forskningsråd: 263248


Files in this item


This item appears in the following collection(s)


Attribution 4.0 International
Except where otherwise noted, this item's license is described as Attribution 4.0 International