Transfer Learning for Animal Species Identification from CCTV Image: Case Study Zakouma National Park

Authors

  • Oumar Hassan Djibrine, Department of Computer Science, Université Virtuelle du Tchad
  • Daouda Ahmat, Department of Computer Science, Université Virtuelle du Tchad
  • Moussa Mahamat Boukar, Department of Computer Science, Nile University, Abuja, Nigeria
  • Usman Abubakar Bello, Department of Computer Science, Baze University, Abuja, Nigeria
  • Azza Youssouf Ali, Department of Computer Science, Université Virtuelle du Tchad

Keywords:

CCTV, species, image classification, deep learning, machine learning, Convolutional Neural Network, image augmentation

Abstract

Accurate identification of animal species is essential for understanding biodiversity, monitoring endangered species, and assessing how climate change may alter species distributions in a given area. Closed-circuit television (CCTV) cameras are a form of passive surveillance technology that generates large volumes of ecological imagery. Manually reviewing such extensive datasets is labor-intensive, time-consuming, and costly, underscoring the need for automated ecological analysis. In computer vision, deep learning networks have made significant progress on object and species recognition, achieving state-of-the-art performance in this domain. In this study, we developed and evaluated machine learning models for classifying animal groups from camera-trap images. Transfer learning was employed to experiment with VGG19, GoogLeNet (InceptionV3), ResNet50, and DenseNet121. The self-trained CNN-1 achieved an accuracy of 53% in multiclass classification, whereas GoogLeNet reached 87%, ResNet50 83%, DenseNet121 81%, and VGG19 79%. These findings indicate that transfer learning yields superior performance compared with the self-trained model. The models showed encouraging results for species detection and classification within Zakouma National Park.


References

C. J. Torney et al., “A comparison of deep learning and citizen science techniques for counting wildlife in aerial survey images,” Methods in Ecology and Evolution, vol. 10, no. 6, pp. 779–787, Mar. 2019, doi: https://doi.org/10.1111/2041-210x.13165.

Z. He et al., “Visual Informatics Tools for Supporting Large-Scale Collaborative Wildlife Monitoring with Citizen Scientists,” IEEE Circuits and Systems Magazine, vol. 16, no. 1, pp. 73–86, 2016, doi: https://doi.org/10.1109/MCAS.2015.2510200.

M. Bessone et al., “Drawn out of the shadows: Surveying secretive forest species with camera trap distance sampling,” Journal of Applied Ecology, vol. 57, no. 5, pp. 963–974, Mar. 2020, doi: https://doi.org/10.1111/1365-2664.13602.

S. Islam, D. Valles, M. Forstner, and W. Stapleton, “Herpetofauna Species Classification from Camera Trap Images Using Deep Neural Network for Conservation Monitoring,” 2020. Accessed: Sep. 23, 2023. [Online]. Available: https://digital.library.txst.edu/server/api/core/bitstreams/a8660318-cc76-4a72-bbc6-1cbef6ecfaee/content

D. J. Welbourne, A. W. Claridge, D. J. Paull, and F. Ford, “Improving Terrestrial Squamate Surveys with Camera-Trap Programming and Hardware Modifications,” Animals, vol. 9, no. 6, p. 388, Jun. 2019, doi: https://doi.org/10.3390/ani9060388.

R. Travel, “ZAKOUMA NATIONAL PARK,” Medium, Jun. 21, 2023. https://medium.com/@responsible_travel/zakouma-national-park-1aeb6cf50548 (accessed Sep. 22, 2023).

mjsnairobdev, “Adventure to Zakouma National Park – ‘An Abundant and Untamed African Wilderness’ – with Origins Safaris,” Origins Safaris, Nov. 28, 2019. https://originsafaris.com/adventure-to-zakouma-national-park-with-origins-safaris/

“Zakouma Biodiversity Conservation,” www.africanparks.org. https://www.africanparks.org/the-parks/zakouma/biodiversity-conservation#:~:text=Buffalo%20numbers%20have%20increased%20exponentially (accessed Sep. 22, 2023).

“Zakouma National Park Five-year Business Plan,” 2018. Accessed: Sep. 22, 2023. [Online]. Available: https://rris.biopama.org/sites/default/files/2019-03/zakouma_business_plan2018-2022.pdf

J. F. Moore et al., “The potential and practice of arboreal camera trapping,” Methods in Ecology and Evolution, vol. 12, no. 10, pp. 1768–1779, Jul. 2021, doi: https://doi.org/10.1111/2041-210x.13666.


R. van Klink et al., “Emerging technologies revolutionise insect ecology and monitoring,” Trends in Ecology & Evolution, vol. 37, no. 10, pp. 872–885, Oct. 2022, doi: https://doi.org/10.1016/j.tree.2022.06.001.

X. Yu, J. Wang, R. Kays, P. A. Jansen, T. Wang, and T. Huang, “Automated identification of animal species in camera trap images,” EURASIP Journal on Image and Video Processing, vol. 2013, no. 1, Sep. 2013, doi: https://doi.org/10.1186/1687-5281-2013-52.

J. L. Dinerman, C. J. Lowenstein, and S. H. Snyder, “Molecular mechanisms of nitric oxide regulation. Potential relevance to cardiovascular disease.,” Circulation Research, vol. 73, no. 2, pp. 217–222, Aug. 1993, doi: https://doi.org/10.1161/01.res.73.2.217.

A. Gomez Villa, A. Salazar, and F. Vargas, “Towards automatic wild animal monitoring: Identification of animal species in camera-trap images using very deep convolutional neural networks,” Ecological Informatics, vol. 41, pp. 24–32, Sep. 2017, doi: https://doi.org/10.1016/j.ecoinf.2017.07.004.

F. Di Michele et al., “Comparison of machine learning tools for damage classification: the case of L’Aquila 2009 earthquake,” Natural Hazards, vol. 116, no. 3, pp. 3521–3546, Jan. 2023, doi: https://doi.org/10.1007/s11069-023-05822-4.

A. Mathis, S. Schneider, J. Lauer, and M. W. Mathis, “A Primer on Motion Capture with Deep Learning: Principles, Pitfalls, and Perspectives,” Neuron, vol. 108, no. 1, pp. 44–65, Oct. 2020, doi: https://doi.org/10.1016/j.neuron.2020.09.017.

E. Sadeqi Azer, M. Haghir Ebrahimabadi, S. Malikić, R. Khardon, and S. C. Sahinalp, “Tumor Phylogeny Topology Inference via Deep Learning,” iScience, vol. 23, no. 11, p. 101655, Nov. 2020, doi: https://doi.org/10.1016/j.isci.2020.101655.

K. W. Ahmed, O. Chanda, N. Mohammed, and Y. Wang, “Obfuscated image classification for secure image-centric friend recommendation,” Sustainable Cities and Society, vol. 41, pp. 940–948, Aug. 2018, doi: https://doi.org/10.1016/j.scs.2017.10.001.

D. Liu, “A Practical Guide to ReLU,” Medium, Nov. 30, 2017. Available: https://medium.com/@danqing/a-practical-guide-to-relu-b83ca804f1f7

R. Yamashita, M. Nishio, R. K. G. Do, and K. Togashi, “Convolutional neural networks: an overview and application in radiology,” Insights into Imaging, vol. 9, no. 4, pp. 611–629, Jun. 2018, doi: https://doi.org/10.1007/s13244-018-0639-9.

C. Shorten and T. M. Khoshgoftaar, “A survey on Image Data Augmentation for Deep Learning,” Journal of Big Data, vol. 6, no. 1, Jul. 2019, doi: https://doi.org/10.1186/s40537-019-0197-0.

F. Wang, W. Liu, H. Liu, and J. Cheng, “Additive Margin Softmax for Face Verification,” IEEE Signal Processing Letters, vol. 25, no. 7, pp. 926–930, Jul. 2018, doi: https://doi.org/10.1109/LSP.2018.2822810.

M. Shaha and M. Pawar, “Transfer Learning for Image Classification,” 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), Mar. 2018, doi: https://doi.org/10.1109/iceca.2018.8474802.

A. Seemendra, R. Singh, and S. Singh, “Breast Cancer Classification Using Transfer Learning,” Lecture Notes in Electrical Engineering, pp. 425–436, Nov. 2020, doi: https://doi.org/10.1007/978-981-15-7804-5_32.

K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” arXiv:1409.1556 [cs], Apr. 2015, Available: http://arxiv.org/abs/1409.1556

L. Yang et al., “GoogLeNet based on residual network and attention mechanism identification of rice leaf diseases,” Computers and Electronics in Agriculture, vol. 204, pp. 107543–107543, Jan. 2023, doi: https://doi.org/10.1016/j.compag.2022.107543.

J. Chen, J. Chen, D. Zhang, Y. Sun, and Y. A. Nanehkaran, “Using deep transfer learning for image-based plant disease identification,” Computers and Electronics in Agriculture, vol. 173, p. 105393, Jun. 2020, doi: https://doi.org/10.1016/j.compag.2020.105393.

C. Zhang, P. Benz, A. Karjauv, and I. S. Kweon, “Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards a Fourier Perspective,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 4, pp. 3296–3304, May 2021, doi: https://doi.org/10.1609/aaai.v35i4.16441.

R. Muhammad, M. M. Boukar, S. Adeshina, and S. Dane, “Deep Learning-Based OCT for Epilepsy: A Review,” Journal of Research in Medical and Dental Science, vol. 10, no. 8, 2022, Accessed: Sep. 23, 2023. [Online]. Available: https://www.jrmds.in/articles/deep-learningbased-oct-for-epilepsy-a-review.pdf

U. B. Abubakar, M. M. Boukar, S. Adeshina, and S. Dane, “Transfer Learning Model Training Time Comparison for Osteoporosis Classification on Knee Radiograph of RGB and Grayscale Images,” WSEAS TRANSACTIONS ON ELECTRONICS, vol. 13, pp. 45–51, Sep. 2022, doi: https://doi.org/10.37394/232017.2022.13.7.

“Report of the 1995 World Health Organization/International Society and Federation of Cardiology Task Force on the Definition and Classification of Cardiomyopathies,” Circulation, vol. 93, no. 5, pp. 841–842, Mar. 1996, doi: https://doi.org/10.1161/01.cir.93.5.841.

J. Astrup, B. K. Siesjö, and L. Symon, “Thresholds in cerebral ischemia - the ischemic penumbra.,” Stroke, vol. 12, no. 6, pp. 723–725, Nov. 1981, doi: https://doi.org/10.1161/01.str.12.6.723.

A. Aboubakar Mahamat et al., “Machine Learning Approaches for Prediction of the Compressive Strength of Alkali Activated Termite Mound Soil,” Applied Sciences, vol. 11, no. 11, p. 4754, May 2021, doi: https://doi.org/10.3390/app11114754.

S. Loussaief and A. Abdelkrim, “Machine learning framework for image classification,” 2016 7th International Conference on Sciences of Electronics, Technologies of Information and Telecommunications (SETIT), Dec. 2016, doi: https://doi.org/10.1109/setit.2016.7939841.

W. Rawat and Z. Wang, “Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review,” Neural Computation, vol. 29, no. 9, pp. 2352–2449, Sep. 2017, doi: https://doi.org/10.1162/neco_a_00990.

Q. Bai, “Big Data Research: Database and Computing,” Journal of Big Data Research, vol. 1, no. 1, pp. 1–4, Apr. 2018, doi: https://doi.org/10.14302/issn.2768-0207.jbr-17-1925.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Communications of the ACM, vol. 60, no. 6, pp. 84–90, May 2012, doi: https://doi.org/10.1145/3065386.

U. A. Ibrahim, M. M. Boukar, and M. A. Suleiman, “Development of Hausa Acoustic Model for Speech Recognition,” International Journal of Advanced Computer Science and Applications, vol. 13, no. 5, Jan. 2022, doi: https://doi.org/10.14569/ijacsa.2022.0130559.

A. Salau, N. Agwu, and M. Boukar, “Deep Learning for Fraud Prediction in Preauthorization for Health Insurance,” International Journal of Engineering and Advanced Technology, vol. 12, no. 2, pp. 75–81, Dec. 2022, doi: https://doi.org/10.35940/ijeat.b3915.1212222.

G. George, S. Adeshina, and M. M. Boukar, “Development of Android Application for Facial Age Group Classification Using TensorFlow Lite,” International Journal of Intelligent Systems and Applications in Engineering, vol. 11, no. 4, pp. 11–17, Sep. 2023, Accessed: Sep. 24, 2023. [Online]. Available: https://ijisae.org/index.php/IJISAE/article/view/3449

M. Badiy and F. Amounas, “Embedding-based Method for the Supervised Link Prediction in Social Networks,” International Journal on Recent and Innovation Trends in Computing and Communication, vol. 11, no. 3, pp. 105–116, 2023, doi: https://doi.org/10.17762/ijritcc.v11i3.6327.

M. Esposito, A. Kowalska, A. Hansen, M. Rodríguez, and M. Santos, “Optimizing Resource Allocation in Engineering Management with Machine Learning,” Kuwait Journal of Machine Learning, vol. 1, no. 2. [Online]. Available: http://kuwaitjournals.com/index.php/kjml/article/view/115

A. B. Kathole, J. Katti, D. Dhabliya, V. Deshpande, A. S. Rajawat, S. B. Goyal, and G. Suciu, “Energy-aware UAV based on blockchain model using IoE application in 6G network-driven cybertwin,” Energies, vol. 15, no. 21, 2022, doi: https://doi.org/10.3390/en15218304.

S. Kawale, D. Dhabliya, and G. Yenurkar (2022).


Published

25.12.2023

How to Cite

Djibrine, O. H., Ahmat, D., Boukar, M. M., Bello, U. A., & Ali, A. Y. (2023). Transfer Learning for Animal Species Identification from CCTV Image: Case Study Zakouma National Park. International Journal of Intelligent Systems and Applications in Engineering, 12(1), 28–40. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/3673

Issue

Section

Research Article
