Advances in Garbage Detection and Classification: A Comprehensive Study of Computer Vision Algorithms

Authors

  • Amruta Hingmire, School of Computer Engineering and Technology, MIT World Peace University, India
  • Uma Pujeri, School of Computer Engineering and Technology, MIT World Peace University, India

Keywords:

Computer Vision, Garbage Detection, Object Detection, Single-Shot Learning, Transfer Learning, Waste Classification, Waste Detection, YOLOv4

Abstract

Effective waste detection and classification are crucial for addressing waste management challenges and for promoting the recycling and reuse of waste materials. The long-term environmental impact of plastic, metal, and glass waste underlines the importance of properly identifying, sorting, and utilizing these waste categories. Although various deep learning algorithms have been developed for waste detection, they often struggle to detect multiple garbage categories in a single input image. This research focuses on computer vision algorithms, specifically the YOLO (You Only Look Once) approach and its lightweight variant Tiny YOLOv4, which build on Convolutional Neural Network (CNN) models, for garbage detection and classification. The efficacy of these models is demonstrated by their performance on waste detection and classification tasks. In summary, this research shows that Tiny YOLOv4 not only strengthens waste detection capabilities but also holds promise for advancing automated waste management practices.
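The page itself provides no code, but as a rough illustration of the kind of pipeline the abstract describes, the sketch below shows how a Tiny YOLOv4 model trained for waste classes could be run on one image with OpenCV's DNN module. It is a minimal sketch, not the authors' implementation: the configuration, weight, and image file names are hypothetical placeholders, and the class list assumes TrashNet-style labels.

```python
# Minimal sketch: running a Tiny YOLOv4 waste detector on one image via OpenCV's DNN module.
# The file names below are illustrative placeholders, not artifacts released with this paper.
import cv2

CONFIG = "tiny-yolov4-waste.cfg"        # hypothetical Darknet config for a waste-trained model
WEIGHTS = "tiny-yolov4-waste.weights"   # hypothetical trained weights
CLASSES = ["cardboard", "glass", "metal", "paper", "plastic", "trash"]  # assumed TrashNet-style labels

# Load the Darknet-format network and wrap it in OpenCV's high-level detection API.
net = cv2.dnn.readNetFromDarknet(CONFIG, WEIGHTS)
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

image = cv2.imread("waste_scene.jpg")
class_ids, scores, boxes = model.detect(image, confThreshold=0.4, nmsThreshold=0.4)

# A single input image can yield several detections, i.e. multiple garbage categories at once.
for class_id, score, box in zip(class_ids, scores, boxes):
    x, y, w, h = box
    label = f"{CLASSES[int(class_id)]}: {float(score):.2f}"
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(image, label, (x, max(y - 5, 15)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

cv2.imwrite("detections.jpg", image)
```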


References

Central Pollution Control Board. (2021). Municipal Solid Waste Management in India: Annual Report 2020-21 [PDF]. Retrieved from https://cpcb.nic.in/uploads/MSW/MSW_AnnualReport_2020-21.pdf

Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 580-587.

Girshick, R. (2015). Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 1440-1448.

Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: towards real-time object detection with region proposal networks. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), 91-99.

Dai, J., Li, Y., He, K., & Sun, J. (2016). R-FCN: object detection via region-based fully convolutional networks. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), 379-387.

Liu, W., Anguelov, D., Erhan, D., Szegedy, C., & Reed, S. (2016). SSD: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision (ECCV), 21-37.

Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 779-788.

Thung, G., & Yang, M. (2016). Classification of Trash for Recyclability Status. CS 229, Stanford University.

Chen, X., Kundu, K., Zhu, Y., et al. (2017). 3D object proposals using stereo imagery for accurate object class detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(5), 1259-1272.

He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770-778.

Huang, J., Rathod, V., Sun, C., Zhu, M., Korattikara, A., Fathi, A., & Murphy, K. (2017). Speed/accuracy trade-offs for modern convolutional object detectors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 7310-7311.

Huang, G., Liu, Z., van der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Bochkovskiy, A., Wang, C. Y., & Liao, H. Y. M. (2020). YOLOv4 Tiny: A Reduced Network for Object Detection. arXiv preprint arXiv:2011.08036.

Thung, G., & Yang, M. (2021). TrashNet Dataset. Retrieved from https://github.com/garythung/TrashNet

D-SWASTE Dataset. Retrieved from https://ieee-dataport.org/open-access/d-swaste-dataset-deep-learning-based-classification-and-segmentation-solid-waste

Waste-CNN Dataset. Retrieved from https://ieee-dataport.org/open-access/waste-cnn-dataset-image-classification-solid-waste

FoodCam 256 Dataset. Retrieved from http://foodcam.mobi/dataset100.html

Li, X., Zhang, M., Huang, Z., Liu, J., Zhou, H., Wang, X., & Wu, J. (2021). A review on computer vision technologies for waste management. Environmental Science and Pollution Research, 28(5), 5142-5161.

Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. arXiv preprint arXiv:1506.02640. https://arxiv.org/abs/1506.02640

Redmon, J., & Farhadi, A. (2016). YOLO9000: Better, faster, stronger. arXiv preprint arXiv:1612.08242. https://arxiv.org/abs/1612.08242

Lin, T.-Y., Goyal, P., Girshick, R., He, K., & Dollár, P. (2017). Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).

Redmon, J., & Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767. https://arxiv.org/abs/1804.02767

Bochkovskiy, A., Wang, C.-Y., & Liao, H.-Y. M. (2020). YOLOv4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934. https://arxiv.org/abs/2004.10934

Wang, C.-Y. (2021). You only learn one representation: Unified network for multiple tasks. arXiv preprint arXiv:2105.04206. https://doi.org/10.48550/arXiv.2105.04206

JSPM. (2023). Tiny YOLO for Trashnet Dataset [Open Source Dataset]. In Roboflow Universe. Roboflow.

Published

25.12.2023

How to Cite

Hingmire, A., & Pujeri, U. (2023). Advances in Garbage Detection and Classification: A Comprehensive Study of Computer Vision Algorithms. International Journal of Intelligent Systems and Applications in Engineering, 12(1), 767–777. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/4179

Section

Research Article