A PointNet Application for Semantic Classification of Ramps in Search and Rescue Arenas

Authors

K. Turgut, B. Kaleci

DOI:

https://doi.org/10.18201/ijisae.2019355378

Keywords:

deep learning, PointNet, NIST ramps, mobile robot, 3D Point Cloud

Abstract

Search and rescue environments can be dangerous for humans due to risks such as structural collapse and leakage of hazardous materials. Using robots in these environments is an appropriate way to reduce these risks. The National Institute of Standards and Technology (NIST) proposed reference test arenas for measuring the autonomous navigation capabilities of mobile robots. In this paper, we present a PointNet application for semantic classification of ramps from point cloud data in these reference test arenas. Since walls and terrain also carry important semantic information for robot navigation, they are considered as well. Previous studies that address the semantic classification problem mostly used image and/or 2D laser range data. However, image data may not be suitable for dusty and poorly lit search and rescue environments, and 2D laser range data may not represent the 3D geometry of objects. Since point cloud data can describe 3D geometry and are not affected by these adverse conditions, they are well suited to classifying ramps, walls, and terrain. The Eskisehir Osmangazi University (ESOGU) laboratory building is modelled in the Gazebo simulation environment, and a Pioneer P3-AT mobile robot equipped with an Asus Xtion Pro sensor is launched in it. The ESOGU RAMPS dataset is then generated by navigating the robot via the Robot Operating System (ROS). The dataset contains two ramp classes (inclined and flat) as well as terrain and wall classes. PointNet is trained and tested on this dataset, and metric and visual results are presented to analyze the classification performance.
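The abstract describes classifying ramp, wall, and terrain point clouds with PointNet. The snippet below is a minimal sketch of a PointNet-style classifier for the four classes mentioned above (inclined ramp, flat ramp, terrain, wall). It is an illustrative approximation only: the PyTorch framework, the layer sizes, the omission of the input/feature transform (T-Net) modules, and the use of raw XYZ coordinates are assumptions, not the authors' actual network or training code.

```python
# Minimal PointNet-style classifier sketch (assumed PyTorch implementation,
# no T-Net modules). Not the paper's implementation.
import torch
import torch.nn as nn


class SimplePointNet(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Shared per-point MLP implemented with 1x1 convolutions over points.
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
        )
        # Classification head applied to the global feature vector.
        self.head = nn.Sequential(
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3) -> (batch, 3, num_points) for Conv1d.
        x = self.mlp(points.transpose(1, 2))
        # Symmetric max pooling over points gives order invariance.
        global_feature = torch.max(x, dim=2).values
        return self.head(global_feature)


if __name__ == "__main__":
    model = SimplePointNet(num_classes=4)   # inclined ramp, flat ramp, terrain, wall
    cloud = torch.rand(8, 1024, 3)          # 8 clouds of 1024 XYZ points each
    logits = model(cloud)                   # (8, 4) class scores
    print(logits.shape)
```

The key design choice, following the PointNet idea, is the symmetric max-pooling step: because the pooled global feature is independent of point ordering, the classifier can consume unordered point clouds such as those captured by the depth sensor in the simulated arenas.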


References

H. Kitano and S. Tadokoro, "RoboCup Rescue: A Grand Challenge for Multiagent and Intelligent Systems," AI Magazine, vol. 22, pp. 39-52, 2001.

A. Jacoff, E. Messina, B. A. Weiss, S. Tadokoro, and Y. Nakagawa, "Test arenas and performance metrics for urban search and rescue robots," in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 2003, pp. 3396-3403.

R. Sheh, S. Schwertfeger, and A. Visser, "16 Years of RoboCup Rescue," KI - Künstliche Intelligenz, vol. 30, no. 3, pp. 267-277, 2016.

DHS/NIST/ASTM standard test methods for response robots: https://www.nist.gov/sites/default/files/documents/el/isd/ks/DHS_NIST_ASTM_Robot_Test_Methods-2.pdf

Gazebo simulator: http://gazebosim.org/

Robot Operating System (ROS): https://www.ros.org/

R. Q. Charles, H. Su, M. Kaichun, and L. J. Guibas, "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017, pp. 77-85.

E. Grilli, F. Menna, and F. Remondino, "A review of point clouds segmentation and classification algorithms," International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 42, no. 2/W3, pp. 339-344, 2017.

T. Rabbani, F. van den Heuvel, and G. Vosselman, "Segmentation of point clouds using smoothness constraint," International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 36, no. 5, pp. 248-253, 2006.

P. J. Besl and R. C. Jain, "Segmentation through variable-order surface fitting," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 2, pp. 167-192, March 1988.

M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381-395, 1981.

D. H. Ballard, "Generalizing the Hough transform to detect arbitrary shapes," Pattern Recognition, vol. 13, no. 2, pp. 111-122, 1981.

G. Lavoué, F. Dupont, and A. Baskurt, "A new CAD mesh segmentation method, based on curvature tensor analysis," Computer-Aided Design, vol. 37, no. 10, pp. 975-987, 2005.

R. B. Rusu and S. Cousins, “3D is here: Point Cloud Library (PCL),” in IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, May 9-13 2011.

B. Kaleci, "İç ortamlarda anlamsal tabanlı keşif algoritmalarının geliştirilmesi (Development of semantic-based exploration algorithms for indoor environments)," Ph.D. thesis, Dept. of Electrical and Electronics Engineering, Eskisehir Osmangazi University (ESOGU), Turkey, 2016.

A. Nguyen and B. Le, "3D point cloud segmentation: A survey," 2013 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM), Manila, 2013, pp. 225-230. doi: 10.1109/RAM.2013.6758588

H. Su, S. Maji, E. Kalogerakis, and E. G. Learned-Miller, "Multi-view convolutional neural networks for 3D shape recognition," in Proc. IEEE International Conference on Computer Vision (ICCV), 2015.

Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao, "3D ShapeNets: A deep representation for volumetric shapes," 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, 2015, pp. 1912-1920.

D. Maturana and S. Scherer, “VoxNet: A 3D Convolutional Neural Network for real-time object recognition,” 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, 2015, pp. 922-928.

Y. Li, S. Pirk, H. Su, C. R. Qi, and L. J. Guibas, "FPNN: Field probing neural networks for 3D data," arXiv preprint arXiv:1605.06240, 2016.

J. Huang and S. You, “Point cloud labeling using 3D Convolutional Neural Network,” 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, 2016, pp. 2670-2675.

F. Monti, D. Boscaini, J. Masci, E. Rodolà, J. Svoboda and M. M. Bronstein, “Geometric Deep Learning on Graphs and Manifolds Using Mixture Model CNNs,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017, pp. 5425-5434.

J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun, "Spectral networks and locally connected networks on graphs," arXiv preprint arXiv:1312.6203, 2013.

M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam and P. Vandergheynst, “Geometric Deep Learning: Going beyond Euclidean data,” in IEEE Signal Processing Magazine, vol. 34, no. 4, pp. 18-42, July 2017.

J. Masci, D. Boscaini, M. M. Bronstein and P. Vandergheynst, “Geodesic Convolutional Neural Networks on Riemannian Manifolds,” 2015 IEEE International Conference on Computer Vision Workshop (ICCVW), Santiago, 2015, pp. 832-840.

C. R. Qi, L. Yi, H. Su, and L. J. Guibas, "PointNet++: Deep hierarchical feature learning on point sets in a metric space," arXiv preprint arXiv:1706.02413, 2017.

F. Engelmann, T. Kontogianni, A. Hermans and B. Leibe, “Exploring Spatial Context for 3D Semantic Segmentation of Point Clouds,” 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, 2017, pp. 716-724.

hector_nist_arenas_gazebo ROS package: http://wiki.ros.org/hector_nist_arenas_gazebo

Published

30.09.2019

How to Cite

Turgut, K., & Kaleci, B. (2019). A PointNet Application for Semantic Classification of Ramps in Search and Rescue Arenas. International Journal of Intelligent Systems and Applications in Engineering, 7(3), 159–165. https://doi.org/10.18201/ijisae.2019355378

Issue

Vol. 7 No. 3 (2019)

Section

Research Article