Improving Medical Image Classification Using Ensemble Learning and Deep Convolutional Neural Networks
Keywords: medical imaging, deep learning, convolutional neural network, ensemble model
Classification of medical images is essential for helping physicians reach correct diagnoses and treatment decisions. Deep Convolutional Neural Networks (CNNs) have achieved strong performance across a wide range of image classification tasks. However, because medical imaging data are intricate and heterogeneous, a single CNN may fail to capture all of the subtle patterns present in the data. In this paper, we propose a method that combines ensemble learning with deep CNNs to improve medical image classification. Our approach builds an ensemble from several deep CNN architectures, each trained on a different subset of the medical image dataset, with the aim of exploiting model diversity to increase classification accuracy and robustness. The ensemble combines the predictions of the individual CNNs using techniques such as bagging and boosting to produce a more comprehensive and reliable classification result. We validate the proposed approach through comprehensive experiments on several medical imaging datasets. The ensemble consistently outperforms single CNN models in terms of accuracy, sensitivity, and specificity. We also analyze how ensemble size, the diversity of the constituent models, and other key factors affect overall performance.
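The abstract does not spell out the exact combination rule, so as a minimal illustration only, the prediction-averaging step of a soft-voting ensemble (assuming each base CNN outputs per-class softmax probabilities; all names here are hypothetical, not from the paper) can be sketched as:

```python
import numpy as np

def soft_vote(prob_list):
    """Soft voting: average the (n_samples x n_classes) class-probability
    matrices produced by each base CNN."""
    return np.mean(np.stack(prob_list, axis=0), axis=0)

def ensemble_predict(prob_list):
    """Return the ensemble's predicted class index for each sample."""
    return np.argmax(soft_vote(prob_list), axis=1)

# Hypothetical softmax outputs of three base CNNs for one image, three classes.
cnn_a = np.array([[0.6, 0.3, 0.1]])
cnn_b = np.array([[0.2, 0.5, 0.3]])
cnn_c = np.array([[0.5, 0.4, 0.1]])

print(ensemble_predict([cnn_a, cnn_b, cnn_c]).tolist())  # → [0]
```

Averaging probabilities rather than hard labels lets a confident model outvote two uncertain ones, which is one reason soft voting is a common default for combining CNN outputs.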