Segmentation stability of human head and neck cancer medical images for radiotherapy applications under de-identification conditions: Benchmarking data sharing and artificial intelligence use-cases

dc.contributor: Aalto-yliopisto [fi]
dc.contributor: Aalto University [en]
dc.contributor.author: Sahlsten, Jaakko [en_US]
dc.contributor.author: Wahid, Kareem A. [en_US]
dc.contributor.author: Glerean, Enrico [en_US]
dc.contributor.author: Jaskari, Joel [en_US]
dc.contributor.author: Naser, Mohamed A. [en_US]
dc.contributor.author: He, Renjie [en_US]
dc.contributor.author: Kann, Benjamin H. [en_US]
dc.contributor.author: Mäkitie, Antti [en_US]
dc.contributor.author: Fuller, Clifton D. [en_US]
dc.contributor.author: Kaski, Kimmo [en_US]
dc.contributor.department: Department of Computer Science [en]
dc.contributor.department: Department of Neuroscience and Biomedical Engineering [en]
dc.contributor.groupauthor: Kaski Kimmo group [en]
dc.contributor.organization: Department of Computer Science [en_US]
dc.contributor.organization: University of Texas MD Anderson Cancer Center [en_US]
dc.contributor.organization: Harvard Medical School [en_US]
dc.contributor.organization: University of Helsinki [en_US]
dc.date.accessioned: 2023-04-12T05:43:20Z
dc.date.available: 2023-04-12T05:43:20Z
dc.date.issued: 2023 [en_US]
dc.description: Funding Information: This work was supported by the National Institutes of Health (NIH)/National Cancer Institute (NCI) through a Cancer Center Support Grant (CCSG; P30CA016672-44). MN is supported by an NIH grant (R01DE028290-01). KW is supported by a training fellowship from The University of Texas Health Science Center at Houston Center for Clinical and Translational Sciences TL1 Program (TL1TR003169), the American Legion Auxiliary Fellowship in Cancer Research, and an NIH/National Institute for Dental and Craniofacial Research (NIDCR) F31 fellowship (1 F31DE031502-01). CF received funding from the NIH/NIDCR (1R01DE025248-01/R56DE025248); an NIH/NIDCR Academic-Industrial Partnership Award (R01DE028290); the National Science Foundation (NSF), Division of Mathematical Sciences, Joint NIH/NSF Initiative on Quantitative Approaches to Biomedical Big Data (QuBBD) Grant (NSF 1557679); the NIH Big Data to Knowledge (BD2K) Program of the NCI Early Stage Development of Technologies in Biomedical Computing, Informatics, and Big Data Science Award (1R01CA214825); the NCI Early Phase Clinical Trials in Imaging and Image-Guided Interventions Program (1R01CA218148); an NIH/NCI Pilot Research Program Award from the UT MD Anderson CCSG Radiation Oncology and Cancer Imaging Program (P30CA016672); an NIH/NCI Head and Neck Specialized Programs of Research Excellence (SPORE) Developmental Research Program Award (P50CA097007); and the National Institute of Biomedical Imaging and Bioengineering (NIBIB) Research Education Program (R25EB025787). Publisher Copyright: Copyright © 2023 Sahlsten, Wahid, Glerean, Jaskari, Naser, He, Kann, Mäkitie, Fuller and Kaski.
dc.description.abstract: Background: Demand for head and neck cancer (HNC) radiotherapy data in algorithmic development has prompted increased image dataset sharing. Medical images must comply with data protection requirements so that they can be re-used without disclosing patient identifiers. Defacing, i.e., the removal of facial features from images, is often considered a reasonable compromise between data protection and re-usability for neuroimaging data. While defacing tools have been developed by the neuroimaging community, their acceptability for radiotherapy applications has not been explored. Therefore, this study systematically investigated the impact of available defacing algorithms on HNC organs at risk (OARs). Methods: A publicly available dataset of magnetic resonance imaging scans for 55 HNC patients with eight segmented OARs (bilateral submandibular glands, parotid glands, level II neck lymph nodes, and level III neck lymph nodes) was utilized. Eight publicly available defacing algorithms were investigated: afni_refacer, DeepDefacer, defacer, fsl_deface, mask_face, mri_deface, pydeface, and quickshear. Using the subset of scans where defacing succeeded (N=29), a 3D U-Net based OAR auto-segmentation model with 5-fold cross-validation was utilized to perform two main experiments: 1) comparing original and defaced data for training when evaluated on original data; 2) using original data for training and comparing model evaluation on original and defaced data. Models were primarily assessed using the Dice similarity coefficient (DSC). Results: Most defacing methods were unable to produce any usable images for evaluation, while mask_face, fsl_deface, and pydeface were unable to remove the face for 29%, 18%, and 24% of subjects, respectively. When evaluated on the original data, the composite OAR DSC was statistically higher (p ≤ 0.05) for the model trained with the original data (DSC of 0.760) than for the mask_face, fsl_deface, and pydeface models (DSCs of 0.742, 0.736, and 0.449, respectively). Moreover, the model trained with original data had decreased performance (p ≤ 0.05) when evaluated on the defaced data, with DSCs of 0.673, 0.693, and 0.406 for mask_face, fsl_deface, and pydeface, respectively. Conclusion: Defacing algorithms may have a significant impact on HNC OAR auto-segmentation model training and testing. This work highlights the need for further development of HNC-specific image anonymization methods. [en]
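For readers unfamiliar with the evaluation metric named in the abstract, the sketch below illustrates how a Dice similarity coefficient (DSC) between a predicted and a reference binary segmentation mask is commonly computed. This is a minimal illustration under stated assumptions, not code from the study; the function name, the toy random masks, and the convention of returning 1.0 when both masks are empty are choices made here for the example.

```python
import numpy as np

def dice_similarity_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    DSC = 2 * |pred AND truth| / (|pred| + |truth|), ranging from 0 (no overlap)
    to 1 (perfect overlap).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denominator = pred.sum() + truth.sum()
    if denominator == 0:
        # Both masks empty: treat as perfect agreement (an assumed convention).
        return 1.0
    return float(2.0 * intersection / denominator)

# Toy example: two hypothetical 3D masks standing in for an auto-segmented
# and a manually contoured organ at risk.
rng = np.random.default_rng(0)
auto_mask = rng.integers(0, 2, size=(8, 8, 8))
manual_mask = rng.integers(0, 2, size=(8, 8, 8))
print(f"DSC = {dice_similarity_coefficient(auto_mask, manual_mask):.3f}")
```

The composite OAR DSC values reported in the abstract aggregate per-structure scores of this kind across the eight segmented OARs.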
dc.description.version: Peer reviewed [en]
dc.format.extent: 10
dc.format.extent: 1-10
dc.format.mimetype: application/pdf [en_US]
dc.identifier.citation: Sahlsten, J, Wahid, K A, Glerean, E, Jaskari, J, Naser, M A, He, R, Kann, B H, Mäkitie, A, Fuller, C D & Kaski, K 2023, 'Segmentation stability of human head and neck cancer medical images for radiotherapy applications under de-identification conditions: Benchmarking data sharing and artificial intelligence use-cases', Frontiers in Oncology, vol. 13, 1120392, pp. 1-10. https://doi.org/10.3389/fonc.2023.1120392 [en]
dc.identifier.doi: 10.3389/fonc.2023.1120392 [en_US]
dc.identifier.issn: 2234-943X
dc.identifier.other: PURE UUID: c94de9b0-1797-4947-9761-f07fda6dee36 [en_US]
dc.identifier.other: PURE ITEMURL: https://research.aalto.fi/en/publications/c94de9b0-1797-4947-9761-f07fda6dee36 [en_US]
dc.identifier.other: PURE LINK: http://www.scopus.com/inward/record.url?scp=85150171233&partnerID=8YFLogxK [en_US]
dc.identifier.other: PURE FILEURL: https://research.aalto.fi/files/105521779/Segmentation_stability_of_human_head_and_neck_cancer_medical_images_for_radiotherapy_applications_under_de_identification_conditions.pdf [en_US]
dc.identifier.uri: https://aaltodoc.aalto.fi/handle/123456789/120404
dc.identifier.urn: URN:NBN:fi:aalto-202304122722
dc.language.iso: en [en]
dc.publisher: Frontiers Research Foundation
dc.relation.ispartofseries: FRONTIERS IN ONCOLOGY [en]
dc.relation.ispartofseries: Volume 13 [en]
dc.rights: openAccess [en]
dc.subject.keyword: anonymization [en_US]
dc.subject.keyword: artificial intelligence (AI) [en_US]
dc.subject.keyword: autosegmentation [en_US]
dc.subject.keyword: defacing [en_US]
dc.subject.keyword: head and neck cancer [en_US]
dc.subject.keyword: medical imaging [en_US]
dc.subject.keyword: MRI [en_US]
dc.subject.keyword: radiotherapy [en_US]
dc.title: Segmentation stability of human head and neck cancer medical images for radiotherapy applications under de-identification conditions: Benchmarking data sharing and artificial intelligence use-cases [en]
dc.type: A1 Original article in a scientific journal (Alkuperäisartikkeli tieteellisessä aikakauslehdessä) [fi]
dc.type.version: publishedVersion
