Automatic Detection and Classification of Oral Cancer from Photographic Images Using Attention Maps and Deep Learning
Keywords: Oral photographic images, Transfer learning, Deep learning, Convolutional neural networks (CNN), Attention maps
Deep learning with convolutional neural networks (DL-CNN) has shown considerable potential for distinguishing cancerous from non-cancerous oral lesions in oral photographic images. Moreover, CNN accuracy can be improved by guiding the model to concentrate on the cancerous regions rather than on less relevant surrounding tissue. This paper proposes a DL-CNN model that focuses directly on cancerous areas in lip, tongue, and cheek images. The proposed model applies transfer learning with DenseNet201 as the base network for detection. It works by identifying regions of interest (RoIs) and generating attention maps that highlight the areas the model should attend to, helping it classify oral lesions correctly. The results demonstrated the effectiveness of the approach, achieving an accuracy of 84.7% in classifying oral cancer lesions.
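The architecture described above can be sketched in TensorFlow/Keras: a DenseNet201 backbone followed by a simple spatial-attention branch whose sigmoid map reweights the backbone's feature maps before classification. This is a minimal illustration under stated assumptions, not the authors' exact model: the attention mechanism, layer names, two-class output head, and `weights=None` (to avoid downloading ImageNet weights here; the paper uses pre-trained transfer learning) are all illustrative choices.

```python
# Hedged sketch of a DenseNet201 transfer-learning classifier with a
# spatial-attention branch. Assumptions: a 1x1-conv sigmoid attention map,
# a binary (cancerous vs. non-cancerous) softmax head, and weights=None
# for offline use -- the paper itself applies ImageNet transfer learning.
import tensorflow as tf
from tensorflow.keras import layers, Model


def build_attention_classifier(input_shape=(224, 224, 3), num_classes=2):
    # DenseNet201 backbone without its classification head.
    base = tf.keras.applications.DenseNet201(
        include_top=False, weights=None, input_shape=input_shape)
    features = base.output  # shape (None, 7, 7, 1920) for 224x224 input

    # Spatial attention: 1x1 conv + sigmoid yields a per-location weight map
    # (the "attention map" highlighting lesion regions); multiplying it into
    # the features suppresses less relevant surrounding areas.
    attn = layers.Conv2D(1, 1, activation="sigmoid",
                         name="attention_map")(features)
    weighted = layers.Multiply()([features, attn])

    # Classification head over the attention-weighted features.
    x = layers.GlobalAveragePooling2D()(weighted)
    x = layers.Dropout(0.3)(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return Model(inputs=base.input, outputs=out)


model = build_attention_classifier()
```

At inference time, the `attention_map` layer's output can be upsampled back to the input resolution and overlaid on the photograph to visualize which region drove the prediction.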