Generative Adversarial Networks for Enhanced Visual Localization in Autonomous Systems
Keywords:
Localization, Autonomous systems, GAN, Binarized Spiking Neural Networks, KITTI

Abstract
Localization is pivotal for autonomous systems, as it determines their position within their environment. While the Global Positioning System (GPS) is a widely used method, its limitations, such as imprecise pose estimation, necessitate alternative approaches. Visual localization, which localizes the system using images captured by cameras, offers a promising solution. In this research, we employ generative networks and deep learning techniques to estimate an autonomous system's position relative to the world. Landmarks are detected using generative networks, and the autonomous system is localized using binarized spiking neural networks based on the identified landmarks. The proposed model achieves a mean Intersection over Union (mIoU) score of 0.85, a 6.25% improvement over existing models. The presented framework enhances localization accuracy, minimizing pose errors both in outdoor environments and in GPS-denied locations.
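Since the headline result is an mIoU of 0.85, the following is a minimal sketch of how mean Intersection over Union is conventionally computed for landmark segmentation masks. The toy label maps and the 0.80 baseline implied by the stated 6.25% relative improvement are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean Intersection over Union across classes.

    pred, target: integer label maps of identical shape (H, W).
    Classes absent from both prediction and ground truth are skipped.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:  # class appears nowhere: do not penalize or reward
            continue
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Toy example: two-class landmark mask.
pred = np.array([[0, 1], [1, 1]])
gt = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, gt, num_classes=2))  # (1/2 + 2/3) / 2 ~= 0.583

# Sanity check on the reported numbers: 0.85 mIoU is a 6.25%
# relative improvement over a hypothetical 0.80 baseline.
baseline, proposed = 0.80, 0.85
assert abs((proposed - baseline) / baseline - 0.0625) < 1e-9
```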
References
Siegwart, R., Nourbakhsh, I. R., & Scaramuzza, D. (2011). Introduction to autonomous mobile robots. MIT Press.
Dobriborsci, D., Kapitonov, A., & Nikolaev, N. (2017, July). The basics of the identification, localization and navigation for mobile robots. In 2017 International Conference on Information and Digital Technologies (IDT) (pp. 100-105). IEEE.
Zhang, T., Li, Q., Zhang, C. S., Liang, H. W., Li, P., Wang, T. M., & Wu, C. (2017). Current trends in the development of intelligent unmanned autonomous systems. Frontiers of Information Technology & Electronic Engineering, 18, 68-85.
Tao, Z., Bonnifait, P., Fremont, V., & Ibanez-Guzman, J. (2013, November). Mapping and localization using GPS, lane markings and proprioceptive sensors. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 406-412). IEEE.
Conduraru, I., Doroftei, I., & Conduraru, A. (2014). Localization methods for mobile robots-a review. Advanced Materials Research, 837, 561-566.
Panigrahi, P. K., & Bisoy, S. K. (2022). Localization strategies for autonomous mobile robots: A review. Journal of King Saud University-Computer and Information Sciences, 34(8), 6019-6039.
Everett, H. R. (1995). Sensors for mobile robots. CRC Press.
Cenkeramaddi, L. R., Bhatia, J., Jha, A., Vishkarma, S. K., & Soumya, J. (2020, November). A survey on sensors for autonomous systems. In 2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA) (pp. 1182-1187). IEEE.
Couturier, A., & Akhloufi, M. A. (2021). A review on absolute visual localization for UAV. Robotics and Autonomous Systems, 135, 103666.
Liu, X., Ballal, T., & Al-Naffouri, T. Y. (2019, September). GNSS-based localization for autonomous vehicles: Prospects and challenges. In Proceedings of the 27th European Signal Processing Conference (EUSIPCO) (pp. 2-6).
Pendleton, S. D., Andersen, H., Du, X., Shen, X., Meghjani, M., Eng, Y. H., ... & Ang Jr, M. H. (2017). Perception, planning, control, and coordination for autonomous vehicles. Machines, 5(1), 6.
Huang, S., & Dissanayake, G. (1999). Robot localization: An introduction. Wiley Encyclopedia of Electrical and Electronics Engineering, 1-10.
Scaramuzza, D., & Fraundorfer, F. (2011). Visual odometry [Tutorial]. IEEE Robotics & Automation Magazine, 18, 80-92. doi:10.1109/MRA.2011.943233.
Krombach, N., Droeschel, D., Houben, S., & Behnke, S. (2018). Feature-based visual odometry prior for real-time semi-dense stereo SLAM. Robotics and Autonomous Systems, 109, 38-58.
Aladem, M., & Rawashdeh, S. A. (2018). Lightweight visual odometry for autonomous mobile robots. Sensors, 18(9), 2837.
An, L., Zhang, X., Gao, H., & Liu, Y. (2017). Semantic segmentation–aided visual odometry for urban autonomous driving. International Journal of Advanced Robotic Systems, 14(5), 1729881417735667.
Pandey, T., Pena, D., Byrne, J., & Moloney, D. (2021). Leveraging deep learning for visual odometry using optical flow. Sensors, 21(4), 1313.
Kim, S., Kim, I., Vecchietti, L. F., & Har, D. (2020). Pose estimation utilizing a gated recurrent unit network for visual localization. Applied Sciences, 10(24), 8876.
Li, G., Yu, L., & Fei, S. (2021). A deep-learning real-time visual SLAM system based on multi-task feature extraction network and self-supervised feature points. Measurement, 168, 108403.
Scaramuzza, D., & Zhang, Z. (2019). Visual-inertial odometry of aerial robots. arXiv preprint arXiv:1906.03289.
Almalioglu, Y., Turan, M., Saputra, M. R. U., de Gusmão, P. P., Markham, A., & Trigoni, N. (2022). SelfVIO: Self-supervised deep monocular Visual–Inertial Odometry and depth estimation. Neural Networks, 150, 119-136.
Bloesch, M., Omari, S., Hutter, M., & Siegwart, R. (2015, September). Robust visual inertial odometry using a direct EKF-based approach. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 298-304). IEEE.
Li, C., Wang, S., Zhuang, Y., & Yan, F. (2019). Deep sensor fusion between 2D laser scanner and IMU for mobile robot localization. IEEE Sensors Journal, 21(6), 8501-8509.
Chhikara, P., Tekchandani, R., Kumar, N., Chamola, V., & Guizani, M. (2020). DCNN-GA: A deep neural net architecture for navigation of UAV in indoor environment. IEEE Internet of Things Journal, 8(6), 4448-4460.
Hu, H., Qiao, Z., Cheng, M., Liu, Z., & Wang, H. (2020). DASGIL: Domain adaptation for semantic and geometric-aware image-based localization. IEEE Transactions on Image Processing, 30, 1342-1353.
Chen, X., Läbe, T., Milioto, A., Röhling, T., Behley, J., & Stachniss, C. (2022). OverlapNet: A siamese network for computing LiDAR scan similarity with applications to loop closing and localization. Autonomous Robots, 46(1), 61-81.
Wen, S., Zhao, Y., Yuan, X., Wang, Z., Zhang, D., & Manfredi, L. (2020). Path planning for active SLAM based on deep reinforcement learning under unknown environments. Intelligent Service Robotics, 13(2), 263-272.
Geiger, A., Lenz, P., Stiller, C., & Urtasun, R. (2013). Vision meets robotics: The KITTI dataset. The International Journal of Robotics Research, 32(11), 1231-1237.
License
Copyright (c) 2024 S. Sindhu, M. Saravanan
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.