Enhanced Lung Segmentation from Chest X-Ray Images using Attention Based FCNN
Keywords:
X-ray, FCNN, attention U-Net, image segmentation

Abstract
Chest radiographs are an extensively used imaging tool for retrieving visual features of the affected area. Visual detection of abnormalities from chest radiographs is a very challenging task for medical practitioners, as the thoracic cavity comprises many sensitive organs such as the lungs, heart, and sternum. Advances in computational technology have enabled medical experts to improve diagnostic accuracy, and deep learning (DL) based architectures have recently gained popularity among radiologists for better diagnosis. In this research article, an attention-based FCNN model is presented for segmenting the lungs from chest radiographs. The proposed model reduces computational overhead by suppressing the irrelevant features generated during feature extraction, which is achieved by including an attention mechanism in the decoder of the proposed FCNN; this further improves the model's performance and computational efficiency. The performance of the proposed model is evaluated on chest radiographs obtained from the JSRT dataset and measured with several evaluation metrics: precision, recall, F1-score, accuracy, and the Jaccard similarity coefficient (JSC). The proposed model obtained 98% accuracy during training and ~97% accuracy during testing. Furthermore, the proposed model is compared with the baseline U-Net.
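The attention mechanism described above can be illustrated with a minimal NumPy sketch of an additive attention gate of the kind introduced in Attention U-Net (Oktay et al., reference below): the decoder's gating signal and the skip-connection features are projected, combined through a ReLU, and squashed into a per-pixel attention map that re-weights the skip features. The shapes, random weights, and function names here are illustrative assumptions, not the paper's actual layer configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, w_x, w_g, w_psi):
    """Additive attention gate (Attention U-Net style), sketched with 1x1 projections.

    x     : skip-connection features from the encoder, shape (H, W, Cx)
    g     : gating signal from the decoder path,       shape (H, W, Cg)
    Returns (x weighted by attention, attention map in (0, 1)).
    """
    # project both inputs to a common intermediate space and combine: ReLU(W_x x + W_g g)
    q = np.maximum(x @ w_x + g @ w_g, 0.0)          # (H, W, Ci)
    # collapse to a single attention coefficient per pixel
    alpha = sigmoid(q @ w_psi)                      # (H, W, 1)
    # irrelevant skip features (alpha near 0) are suppressed before concatenation
    return x * alpha, alpha

# toy example: 8x8 feature maps, 4 skip channels, 4 gating channels, 8 intermediate
H = W = 8
Cx = Cg = 4
Ci = 8
x = rng.standard_normal((H, W, Cx))
g = rng.standard_normal((H, W, Cg))
w_x = rng.standard_normal((Cx, Ci)) * 0.1
w_g = rng.standard_normal((Cg, Ci)) * 0.1
w_psi = rng.standard_normal((Ci, 1)) * 0.1

out, alpha = attention_gate(x, g, w_x, w_g, w_psi)
print(out.shape, alpha.shape)
```

In the full model this gate would sit on each skip connection feeding the decoder, so that only the spatial locations the decoder deems relevant contribute to the upsampled feature maps.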
References
M. O. Wielpütz, C. P. Heußel, F. J. F. Herth, and H.-U. Kauczor, “Radiological Diagnosis in Lung Disease,” Dtsch Arztebl Int, Mar. 2014, doi: 10.3238/arztebl.2014.0181.
R. Ali, A. Hussain, and M. Man, “Feature Extraction and Classification for Multiple Species of Gyrodactylus Ectoparasite,” TELKOMNIKA Indonesian Journal of Electrical Engineering, vol. 13, no. 3, pp. 503–511, Mar. 2015.
Qureshi et al., “Medical image segmentation using deep semantic-based methods: A review of techniques, applications and emerging trends,” Information Fusion, vol. 90, pp. 316–352, Feb. 2023, doi: 10.1016/j.inffus.2022.09.031.
J. John and M. G. Mini, “Multilevel Thresholding Based Segmentation and Feature Extraction for Pulmonary Nodule Detection,” Procedia Technology, vol. 24, pp. 957–963, 2016, doi: 10.1016/j.protcy.2016.05.209.
D. Barbosa, T. Dietenbeck, J. Schaerer, J. D’hooge, D. Friboulet, and O. Bernard, “B-Spline Explicit Active Surfaces: An Efficient Framework for Real-Time 3-D Region-Based Segmentation,” IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 241–251, 2012, doi: 10.1109/TIP.2011.2161484.
B. N. Li, C. K. Chui, S. Chang, and S. H. Ong, “Integrating spatial fuzzy clustering with level set methods for automated medical image segmentation,” Comput Biol Med, vol. 41, no. 1, pp. 1–10, 2011, doi: 10.1016/j.compbiomed.2010.10.007.
Md. Z. Islam, Md. M. Islam, and A. Asraf, “A combined deep CNN-LSTM network for the detection of novel coronavirus (COVID-19) using X-ray images,” Inform Med Unlocked, vol. 20, p. 100412, 2020, doi: 10.1016/j.imu.2020.100412.
B. Sahiner et al., “Deep learning in medical imaging and radiation therapy,” Med Phys, vol. 46, no. 1, pp. e1–e36, Jan. 2019, doi: 10.1002/mp.13264.
A. Maity, T. R. Nair, S. Mehta, and P. Prakasam, “Automatic lung parenchyma segmentation using a deep convolutional neural network from chest X-rays,” Biomed Signal Process Control, vol. 73, p. 103398, 2022, doi: 10.1016/j.bspc.2021.103398.
X. Zhao et al., “D2A U-Net: Automatic segmentation of COVID-19 CT slices based on dual attention and hybrid dilated convolution,” Comput Biol Med, vol. 135, p. 104526, 2021, doi: 10.1016/j.compbiomed.2021.104526.
Y. Wu, G. Wang, Z. Wang, H. Wang, and Y. Li, “DI-Unet: Dimensional interaction self-attention for medical image segmentation,” Biomed Signal Process Control, vol. 78, p. 103896, 2022, doi: 10.1016/j.bspc.2022.103896.
S. Arvind, J. V. Tembhurne, T. Diwan, and P. Sahare, “Improvised light weight deep CNN based U-Net for the semantic segmentation of lungs from chest X-rays,” Results in Engineering, vol. 17, p. 100929, 2023, doi: 10.1016/j.rineng.2023.100929.
S. Tyagi and S. N. Talbar, “CSE-GAN: A 3D conditional generative adversarial network with concurrent squeeze-and-excitation blocks for lung nodule segmentation,” Comput Biol Med, vol. 147, p. 105781, 2022, doi: 10.1016/j.compbiomed.2022.105781.
J. Shiraishi et al., “Development of a Digital Image Database for Chest Radiographs With and Without a Lung Nodule,” American Journal of Roentgenology, vol. 174, no. 1, pp. 71–74, Jan. 2000, doi: 10.2214/ajr.174.1.1740071.
O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, Eds., Cham: Springer International Publishing, 2015, pp. 234–241.
O. Oktay et al., “Attention U-Net: Learning Where to Look for the Pancreas,” Apr. 2018, Accessed: Apr. 05, 2023. [Online]. Available: http://arxiv.org/abs/1804.03999
License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.