FS (CA)2 Net: Feedback Spatial Channel Attention and Context Attribute Extraction System for Skin Lesion Segmentation in Melanoma
Keywords:
Skin Lesion Segmentation, Melanoma, UNet, Deep Learning, Computer Vision

Abstract
Melanoma is one of the most common and dangerous forms of skin cancer, and its incidence is rising globally. Early detection of such lesions is crucial to saving lives, but visual inspection during medical examination is challenging because lesion boundaries often overlap with the surrounding skin. Recent research has demonstrated the value of deep learning (DL) in many real-time applications, and DL has emerged as a promising route to state-of-the-art results in several medical imaging tasks. In this work, we focus on developing a dedicated methodology for skin lesion segmentation that improves overall segmentation accuracy. Traditional DL-based methods have delivered gradual gains in segmentation accuracy, but computational complexity and accuracy degradation caused by skin hair and ambiguous borders remain open challenges. We therefore present a novel DL-based scheme built on the UNet architecture. UNet-based architectures have shown noteworthy performance on medical image segmentation tasks, yet collecting and processing spatial and contextual information remains difficult. To overcome this issue, we incorporate channel and spatial attention modules together with a feedback process, which strengthens the skip connections by retaining feature-map information. The proposed FS (CA)2 Net architecture is evaluated on multiple datasets, and the experimental study shows that it achieves Dice scores of 0.96, 0.99, and 0.94 on the PH2, ISIC 2017, and HAM10000 datasets, respectively.
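The abstract describes channel and spatial attention modules applied to the skip connections, with a feedback step that retains feature-map information. The paper's exact module wiring is not given here, so the following is a minimal NumPy sketch of a CBAM-style refinement of an encoder feature map, with the feedback modeled as a residual addition; the function names, the pooling choices, and the residual formulation are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap):
    # fmap: (C, H, W). Weight each channel by its global average response.
    pooled = fmap.mean(axis=(1, 2))        # (C,)
    weights = sigmoid(pooled)              # (C,) in (0, 1)
    return fmap * weights[:, None, None]

def spatial_attention(fmap):
    # Collapse channels with average and max pooling, then weight
    # each spatial location.
    avg_map = fmap.mean(axis=0)            # (H, W)
    max_map = fmap.max(axis=0)             # (H, W)
    weights = sigmoid(avg_map + max_map)   # (H, W) in (0, 1)
    return fmap * weights[None, :, :]

def attentive_skip(encoder_fmap):
    # Refine the encoder feature map before it is passed to the
    # decoder via the skip connection.
    refined = spatial_attention(channel_attention(encoder_fmap))
    # "Feedback": add the refined map back onto the original features
    # so the original feature-map information is retained.
    return encoder_fmap + refined

x = np.random.rand(8, 16, 16)  # toy encoder feature map: 8 channels, 16x16
y = attentive_skip(x)
print(y.shape)  # (8, 16, 16)
```

In a full UNet this refined map would be concatenated with the corresponding decoder feature map; the residual addition here stands in for the feedback path only as a shape-level illustration.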
License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
IJISAE open access articles are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. This license lets the audience give appropriate credit, provide a link to the license, and indicate if changes were made; if they remix, transform, or build upon the material, they must distribute their contributions under the same license as the original.