Aircraft Detection System Based on Regions with Convolutional Neural Networks
Abstract

Object detection in remote sensing imagery is an important topic in image processing research. Detecting objects and regions in satellite imagery supports various applications, such as identifying residential areas, agricultural lands, road lines, and ships, as well as airports and hangars. As a more specific remote sensing detection task, a stationary aircraft detection system could serve as a model for some military applications, namely the detection of stationary aircraft targets at airports. In the proposed study, a deep learning-based model detects aircraft at airports using satellite images from Google Earth. The model builds on the state-of-the-art Regions with Convolutional Neural Networks (R-CNN) framework. First, a CNN designed from scratch performs the basic learning step of the system. Then, the R-CNN performs region detection, anchoring the bounding boxes of the stationary aircraft. A large dataset of aircraft images is used to train the CNN. To validate the system, satellite images captured from airports in Turkey are used. The results show that the proposed model performs aircraft detection with high accuracy: the classifier network, which constitutes the first step of the study, achieves 98.4% test accuracy, and the proposed detection framework successfully identifies aircraft by producing matched bounding boxes in the test images.
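The abstract describes a two-stage pipeline: a CNN classifier scores candidate regions, and the R-CNN stage then keeps the best-scoring, non-overlapping bounding boxes. A standard ingredient of that second stage is non-maximum suppression over intersection-over-union (IoU). The sketch below is illustrative only, not the authors' implementation: the box format (x1, y1, x2, y2), the toy boxes/scores, and the 0.3 IoU threshold are all assumptions chosen for the example.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.3):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop any remaining box that overlaps it beyond iou_thresh, repeat."""
    order = scores.argsort()[::-1]          # indices, best score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        overlaps = np.array([iou(boxes[i], boxes[j]) for j in rest])
        order = rest[overlaps <= iou_thresh]
    return keep

# Toy example: two near-duplicate detections of one aircraft plus one
# distinct detection elsewhere in the image (hypothetical coordinates).
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # the second box is suppressed as a duplicate
```

In a detector like the one described, each kept index corresponds to one reported aircraft bounding box; the IoU threshold trades off duplicate suppression against missing closely parked aircraft.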
Copyright (c) 2020 Ferhat Uçar, Besir Dandil, Fikret Ata
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.