A Proposed CNN-based Hybrid Deep Learning Model for the Segmentation of Multi-organ Functional Tissue Units
Keywords: Multi-Organ Segmentation, Functional Tissue Units, Deep Learning, U-Net Architecture, Convolutional Neural Networks, Image Masking
This paper presents a deep learning approach to the HuBMAP Organ Segmentation competition, which seeks effective techniques for identifying and segmenting functional tissue units in high-resolution histology images. The proposed solution combines a convolutional neural network with a U-Net architecture and achieves state-of-the-art performance on the competition's validation dataset. Extensive experiments analyze the effect of different hyperparameters and preprocessing techniques on model performance, as well as the use of pre-trained backbones such as AlexNet, ZFNet, and EfficientNet. The proposed CNN-based hybrid model achieves a mean Dice coefficient of 0.84178 over the images in the test set, highlighting the effectiveness of the approach and providing insights into organ segmentation in high-resolution histology images, which has important applications in medical diagnosis and research.
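For reference, the Dice coefficient reported above measures the overlap between a predicted mask and the ground-truth mask: twice the size of their intersection divided by the sum of their areas. The following is a minimal NumPy sketch of this metric; the function name and the smoothing term (added to avoid division by zero on empty masks) are our own illustration, not taken from the paper.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, smooth: float = 1e-6) -> float:
    """Dice coefficient between two binary masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # 2 * |A ∩ B| / (|A| + |B|), smoothed so empty-vs-empty yields ~1
    return (2.0 * intersection + smooth) / (pred.sum() + target.sum() + smooth)

# Example: two 4x4 masks, each with 3 foreground pixels, overlapping on 2
a = np.zeros((4, 4), dtype=np.uint8); a[0, :3] = 1
b = np.zeros((4, 4), dtype=np.uint8); b[0, 1:4] = 1
print(round(dice_coefficient(a, b), 3))  # 2*2 / (3+3) ≈ 0.667
```

A mean Dice score such as the 0.84178 reported here would then be the average of this per-image value over the test set.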
IJISAE open access articles are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Under this license, readers may share and adapt the material provided they give appropriate credit, provide a link to the license, and indicate if changes were made; if they remix, transform, or build upon the material, they must distribute their contributions under the same license as the original.