Dense Visual Odometry Using Genetic Algorithm
Keywords:
camera motion, genetic algorithm, RGB-D images, static scene, visual odometry

Abstract
Our work aims to estimate the motion of a camera mounted on the head of a mobile robot or a moving object from RGB-D images of a static scene. The motion-estimation problem is cast as a nonlinear least-squares function, which must be solved iteratively. Classic methods obtain an iterative solution by linearizing this function; metaheuristic optimization methods can also solve it and improve the results. In this paper, a new visual-odometry algorithm operating on a sequence of RGB-D images is developed. It is based on a genetic algorithm: the proposed iterative method evolves a population of candidate solutions to estimate the optimal motion, which is then compared against traditional methods. For evaluation, we use the root mean square error to compare our method with an energy-based method and another metaheuristic method, and we demonstrate its efficiency on a large set of images.
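The population-based search described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's implementation: a genetic algorithm minimizes a least-squares cost over a 6-DoF motion vector xi = (tx, ty, tz, rx, ry, rz). The cost here is a synthetic quadratic with a hypothetical known optimum `XI_TRUE` standing in for the photometric error between two RGB-D frames; the selection, crossover, and mutation operators are generic choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical ground-truth motion used only to define the toy cost.
XI_TRUE = np.array([0.1, -0.05, 0.2, 0.01, -0.02, 0.03])

def residual_cost(xi):
    """Sum of squared residuals; stands in for the photometric error."""
    return float(np.sum((xi - XI_TRUE) ** 2))

def genetic_motion_estimate(pop_size=60, generations=200,
                            sigma=0.05, elite_frac=0.2):
    # Initialize a population of candidate motions around zero.
    pop = rng.normal(0.0, 0.3, size=(pop_size, 6))
    n_elite = max(2, int(elite_frac * pop_size))
    for _ in range(generations):
        costs = np.array([residual_cost(ind) for ind in pop])
        # Selection: keep the lowest-cost individuals (elitism).
        elite = pop[np.argsort(costs)[:n_elite]]
        children = []
        while len(children) < pop_size - n_elite:
            a, b = elite[rng.integers(n_elite, size=2)]
            alpha = rng.random()
            child = alpha * a + (1.0 - alpha) * b      # blend crossover
            child += rng.normal(0.0, sigma, size=6)    # Gaussian mutation
            children.append(child)
        pop = np.vstack([elite] + children)
    costs = np.array([residual_cost(ind) for ind in pop])
    return pop[np.argmin(costs)]

xi_hat = genetic_motion_estimate()
rmse = np.sqrt(np.mean((xi_hat - XI_TRUE) ** 2))
print("estimated motion:", np.round(xi_hat, 3))
print("RMSE:", rmse)
```

In the paper's setting the fitness of each individual would instead be computed by warping one RGB-D frame toward the other under the candidate motion and accumulating the per-pixel error, with the RMSE against ground-truth trajectories used for evaluation.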