Bee vs Wasp Classification Using Advanced Deep Learning Techniques: CNN, VGG 16
Keywords:
CNN, VGG16, VGG19, ResNet34, MobileNet
Abstract
This paper explores the use of Convolutional Neural Networks (CNNs) and the ResNet-34 architecture for classifying and discriminating bees and wasps from image datasets. Using the deep learning capabilities of CNNs and the residual learning of ResNet-34, we address image recognition challenges in biological monitoring. Various datasets of bee and wasp images were used to train and validate the ResNet-34 model. The model outperformed traditional methods and achieved high accuracy in discriminating between the two groups of insects. This study demonstrates the potential of CNNs and ResNet-34 to automatically identify insects, supporting biodiversity research and conservation efforts.
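The transfer-learning approach summarized above can be sketched as follows. This is a minimal illustration only, assuming PyTorch and torchvision are available; the function names, the frozen-backbone choice, and the class ordering are assumptions for the example, not details taken from the paper.

```python
# Sketch: fine-tuning an ImageNet-pretrained ResNet-34 for binary
# bee-vs-wasp classification (hypothetical setup, not the authors' exact code).

def build_bee_wasp_model(num_classes: int = 2):
    """Load a pretrained ResNet-34 and replace its final classification layer."""
    import torch.nn as nn
    from torchvision import models

    model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
    # Freeze the pretrained backbone so only the new head is trained.
    for param in model.parameters():
        param.requires_grad = False
    # Replace the 1000-class ImageNet head with a 2-class head.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Assumed class ordering for this sketch.
CLASS_NAMES = ["bee", "wasp"]

def predict_label(logits):
    """Map a pair of raw output logits to a class name (pure Python)."""
    return CLASS_NAMES[max(range(len(logits)), key=lambda i: logits[i])]
```

In practice the new head would then be trained with a standard cross-entropy loop over the labeled bee/wasp images; unfreezing later layers of the backbone is a common variant when more data is available.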
License
![Creative Commons License](http://i.creativecommons.org/l/by-sa/4.0/88x31.png)
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
All papers should be submitted electronically. All submitted manuscripts must be original work that is not under submission at another journal or under consideration for publication in another form, such as a monograph or chapter of a book. Authors of submitted papers are obligated not to submit their paper for publication elsewhere until an editorial decision is rendered on their submission. Further, authors of accepted papers are prohibited from publishing the results in other publications that appear before the paper is published in the Journal unless they receive approval for doing so from the Editor-In-Chief.
IJISAE open access articles are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. This license allows readers to share and adapt the material, provided they give appropriate credit, provide a link to the license, and indicate if changes were made; if they remix, transform, or build upon the material, they must distribute their contributions under the same license as the original.