Browsing by Author "Mohanty, Sachi Nandan"
Now showing 1 - 3 of 3
- Block-greedy and CNN-based underwater image dehazing for novel depth estimation and optimal ambient light
A1 Original article in a scientific journal (2021-12-01) Alenezi, Fayadh; Armghan, Ammar; Mohanty, Sachi Nandan; Jhaveri, Rutvij H.; Tiwari, Prayag
Underwater image enhancement has received too little attention, leaving room for further research; in particular, the global background light has not been adequately addressed in the presence of backscattering. This paper presents a technique based on pixel differences between global and local patches for scene depth estimation. The pixel variance is based on the green-red, green-blue, and red-blue channel pairs, in addition to the absolute mean intensity functions. The global background light is extracted from a moving average of the impact of suspended light and the brightest pixels within the image color channels. We introduce the block-greedy algorithm in a novel Convolutional Neural Network (CNN) proposed to normalize the attenuation ratios of different color channels and select regions with the lowest variance. We address the discontinuity associated with underwater images by transforming both local and global pixel values. We minimize energy in the proposed CNN via a novel Markov random field to smooth edges and improve the final underwater image features. A comparison against existing state-of-the-art algorithms using entropy, Underwater Color Image Quality Evaluation (UCIQE), Underwater Image Quality Measure (UIQM), Underwater Image Colorfulness Measure (UICM), and Underwater Image Sharpness Measure (UISM) indicates better performance of the proposed approach in terms of average and consistency. On average, the UICM values of the proposed technique exceed those of the reference methods, which explains its better color balance. The mean (µ) values of UCIQE, UISM, and UICM for the proposed method exceed those of the existing techniques. The proposed method shows improvements of 0.4%, 4.8%, 9.7%, 5.1%, and 7.2% in entropy, UCIQE, UIQM, UICM, and UISM, respectively, over the best existing techniques. Consequently, the dehazed images have sharper, more colorful, and clearer features than those produced by existing state-of-the-art methods. Stable σ (standard deviation) values reflect the consistency, in terms of color sharpness and feature clarity, of most of the proposed image results compared with the reference methods. In our own assessment, the only weakness of the proposed technique is that it applies solely to underwater images. Future research could seek to strengthen edges without enhancing color saturation.
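The abstract describes the depth-estimation and background-light steps only at a high level. The sketch below shows one common way such inter-channel depth cues and a brightest-pixel background-light estimate can be computed; the function names, the patch-averaging scheme, and the top-0.1% pixel fraction are all assumptions, and the paper's block-greedy CNN and Markov-random-field smoothing are not reproduced here.

```python
# Minimal illustrative sketch, not the authors' implementation.
import numpy as np
from scipy.signal import convolve2d

def depth_cue(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Crude scene-depth proxy from an RGB image in [0, 1]: mean absolute
    differences between the green/red, green/blue, and red/blue channels,
    box-filtered over local patches (red attenuates fastest underwater,
    so larger differences hint at greater depth)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    diff = (np.abs(g - r) + np.abs(g - b) + np.abs(r - b)) / 3.0
    kernel = np.ones((patch, patch)) / (patch * patch)
    return convolve2d(diff, kernel, mode="same", boundary="symm")

def background_light(img: np.ndarray, top_frac: float = 0.001) -> np.ndarray:
    """Per-channel global background light from the brightest pixels,
    a common stand-in for the paper's moving-average scheme."""
    flat = img.reshape(-1, 3)
    n = max(1, int(top_frac * flat.shape[0]))
    brightest = np.argsort(flat.mean(axis=1))[-n:]
    return flat[brightest].mean(axis=0)  # one value per color channel
```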
- COVID-Transformer: Interpretable COVID-19 detection using vision transformer for healthcare
A1 Original article in a scientific journal (2021-11-01) Shome, Debaditya; Kar, T.; Mohanty, Sachi Nandan; Tiwari, Prayag; Muhammad, Khan; Altameem, Abdullah; Zhang, Yazhou; Saudagar, Abdul Khader Jilani
During the recent pandemic, accurate and rapid testing of patients remained a critical task in diagnosing COVID-19 and controlling its spread in the healthcare industry. Because of the sudden increase in cases, most countries faced test scarcity and low testing rates. Chest X-rays have been shown in the literature to be a potential source of testing for COVID-19 patients, but manually checking X-ray reports is time-consuming and error-prone. Considering these limitations and the advancements in data science, we propose a Vision Transformer-based deep learning pipeline for COVID-19 detection from chest X-ray imaging. Because large data sets were lacking, we collected data from three open-source chest X-ray data sets and aggregated them into a 30K-image data set, which, to our knowledge, is the largest publicly available collection of chest X-ray images in this domain. Our proposed transformer model effectively differentiates COVID-19 from normal chest X-rays with an accuracy of 98% and an AUC score of 99% in the binary classification task. It distinguishes COVID-19, normal, and pneumonia patients' X-rays with an accuracy of 92% and an AUC score of 98% in the multi-class classification task. For evaluation on our data set, we fine-tuned some widely used models from the literature as baselines, namely EfficientNetB0, InceptionV3, ResNet50, MobileNetV3, Xception, and DenseNet-121; our proposed transformer model outperformed them on all metrics. In addition, a Grad-CAM-based visualization makes our approach interpretable to radiologists and can be used to monitor the progression of the disease in the affected lungs, assisting healthcare.
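As a rough illustration of the classification setup described in this abstract, the hedged sketch below fine-tunes an off-the-shelf Vision Transformer for the three-class chest X-ray task. It is not the authors' pipeline: the choice of torchvision's ViT-B/16, the learning rate, and the data wiring are all assumptions.

```python
# Minimal illustrative sketch, not the authors' pipeline.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # COVID-19, normal, pneumonia

# Start from an ImageNet-pretrained ViT and replace its classifier head.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)  # assumed LR

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of 224x224 RGB X-ray crops."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

A transformer backbone is a plausible fit here because self-attention captures global context across the whole radiograph, whereas the CNN baselines listed in the abstract aggregate mostly local features.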
- A smart ontology-based IoT framework for remote patient monitoring
A1 Original article in a scientific journal (2021-07) Sharma, Nonita; Mangla, Monika; Mohanty, Sachi Nandan; Gupta, Deepak; Tiwari, Prayag; Shorfuzzaman, Mohammad; Rawashdeh, Majdi
The Internet of Things (IoT) is among the most promising technologies for health systems, and IoT-based systems enable continuous monitoring in indoor and outdoor settings. Remote monitoring has revolutionized healthcare by connecting remote and hard-to-reach regions. During the COVID-19 pandemic in particular, a remote monitoring system is imperative for assessing patients remotely and curbing the disease's spread early. This paper proposes a framework that provides up-to-date information about COVID-19 patients in the vicinity and thus supplies identifiable data for remote monitoring of local cohorts. The proposed model is an IoT-based, remote-access, alarm-enabled bio-wearable sensor system for early detection of COVID-19, built on an ontology method using 1D biomedical signals such as ECG, PPG, temperature, and accelerometer data. The proposed ontology-based remote monitoring system also addresses the associated security and privacy challenges. The model was simulated using the Cooja simulator, during which it achieved an accuracy of 96.33%, establishing its efficacy. Its effectiveness is further strengthened by efficient power consumption.
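In its simplest form, the alarm-enabled monitoring this abstract describes could reduce to threshold rules over the sensed vitals. The sketch below is an illustrative stand-in for the paper's ontology-based reasoning, not its actual method; the data fields and every threshold are assumptions.

```python
# Minimal illustrative sketch, not the paper's system. Thresholds are
# assumed for illustration only and are not clinical guidance.
from dataclasses import dataclass

@dataclass
class VitalSigns:
    heart_rate_bpm: float   # derived from the ECG/PPG signals
    spo2_percent: float     # oxygen saturation from PPG
    temperature_c: float    # body temperature sensor

def covid_risk_alarm(v: VitalSigns) -> bool:
    """Raise an alarm when readings match a simple symptom pattern:
    fever together with low oxygen saturation or an elevated heart rate."""
    fever = v.temperature_c >= 38.0
    hypoxia = v.spo2_percent < 94.0
    tachycardia = v.heart_rate_bpm > 100.0
    return fever and (hypoxia or tachycardia)

if __name__ == "__main__":
    reading = VitalSigns(heart_rate_bpm=108, spo2_percent=92, temperature_c=38.4)
    print("ALERT" if covid_risk_alarm(reading) else "OK")
```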