COVID-Transformer: Interpretable COVID-19 Detection Using Vision Transformer for Healthcare
Access rights: openAccess (publishedVersion)
A1 Original article in a scientific journal
This publication is imported from Aalto University research portal.
View publication in the Research portal (opens in new window)
View/Open full text file from the Research portal (opens in new window)
Language: en
Pages: 14
Series: International Journal of Environmental Research and Public Health, Volume 18, Issue 21
Abstract
In the recent pandemic, accurate and rapid testing of patients remained a critical task in the diagnosis and control of COVID-19 spread in the healthcare industry. Because of the sudden increase in cases, most countries faced test shortages and low testing rates. Chest X-rays have been shown in the literature to be a potential source of testing for COVID-19 patients, but manually checking X-ray reports is time-consuming and error-prone. Considering these limitations and the advancements in data science, we proposed a Vision Transformer-based deep learning pipeline for COVID-19 detection from chest X-ray images. Due to the lack of large data sets, we collected data from three open-source data sets of chest X-ray images and aggregated them to form a 30K-image data set, which to our knowledge is the largest publicly available collection of chest X-ray images in this domain. Our proposed transformer model effectively differentiates COVID-19 from normal chest X-rays with an accuracy of 98% and an AUC score of 99% in the binary classification task, and distinguishes COVID-19, normal, and pneumonia patients' X-rays with an accuracy of 92% and an AUC score of 98% in the multi-class classification task. For evaluation on our data set, we fine-tuned several widely used models from the literature, namely EfficientNetB0, InceptionV3, ResNet50, MobileNetV3, Xception, and DenseNet-121, as baselines. Our proposed transformer model outperformed them on all metrics. In addition, a Grad-CAM-based visualization was created, which makes our approach interpretable by radiologists and can be used to monitor the progression of the disease in the affected lungs, assisting healthcare.

Description
Funding Information: This work was supported by the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, through project number 959. Publisher Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
Citation
Shome, D, Kar, T, Mohanty, S N, Tiwari, P, Muhammad, K, Altameem, A, Zhang, Y & Saudagar, A K J 2021, 'COVID-Transformer: Interpretable COVID-19 detection using vision transformer for healthcare', International Journal of Environmental Research and Public Health, vol. 18, no. 21, 11086. https://doi.org/10.3390/ijerph182111086