Identifying Complex Human Actions with a Hierarchical Feature Reduction and Deep Learning-Based Approach

Authors

  • Lakshmi Alekhya Jandhyam, Ragupathy Rengaswamy, Narayana Satyala

Keywords

Human action recognition, Deep neural networks, Histogram of oriented gradients (HOG), Skeleton model, Feature extraction, Spatio-temporal data.

Abstract

Automated human activity recognition is among the most appealing and useful research areas in computer vision. These systems classify human behaviour from appearance and motion patterns in video clips. However, most previous research has either ignored temporal data or failed to exploit it when identifying actions in video sequences with standard techniques and classical neural networks. At the same time, reliable and precise human action recognition carries a significant processing cost. To overcome the challenges of the pre-processing stage, we randomly sample frames from the input sequences and extract only the most salient features from each representative frame rather than the full feature set. We propose a hierarchical approach in which background subtraction and the Histogram of Oriented Gradients (HOG) are applied first, followed by skeleton modelling and a deep neural network. A combination of a CNN and an LSTM recurrent network handles feature selection and the retention of historical data, and a Softmax-KNN classifier finally detects the human actions. We call our model HFR-DL, short for Hierarchical Feature Reduction and Deep Learning-based action recognition. To assess the proposed approach, we benchmark on the UCF101 dataset, which is widely used by action recognition researchers and contains 101 challenging actions captured in the wild. Experimental comparison with eight state-of-the-art methods shows significant improvements in both speed and accuracy.
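As a rough illustration of the pipeline described above, the following Python sketch shows random frame sampling, per-frame HOG extraction, and a CNN-LSTM with a Softmax output, assuming OpenCV and TensorFlow/Keras are available. The frame count, layer sizes, and the use of 1-D convolutions over HOG descriptors are illustrative assumptions rather than the authors' exact configuration; the background subtraction, skeleton modelling, and KNN stages are omitted.

import cv2
import numpy as np
from tensorflow.keras import layers, models

NUM_FRAMES = 16          # frames sampled per clip (assumption)
FRAME_SIZE = (64, 128)   # width x height matching cv2.HOGDescriptor defaults
NUM_CLASSES = 101        # UCF101 action classes

def sample_frames(video_path, k=NUM_FRAMES):
    """Randomly sample k frames from a video (the pre-processing step)."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    picks = sorted(np.random.choice(total, size=min(k, total), replace=False))
    frames = []
    for idx in picks:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.resize(frame, FRAME_SIZE))
    cap.release()
    return np.stack(frames)

def hog_features(frames):
    """Compute a HOG descriptor for each sampled frame."""
    hog = cv2.HOGDescriptor()  # default 64x128 detection window
    feats = [hog.compute(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)).ravel()
             for f in frames]
    return np.stack(feats)     # shape: (num_frames, hog_dim)

def build_cnn_lstm(hog_dim):
    """CNN for per-frame feature reduction, LSTM for temporal memory, and a
    Softmax head over the 101 classes. The KNN re-ranking stage named in the
    abstract would operate on the Softmax outputs and is not shown."""
    inputs = layers.Input(shape=(NUM_FRAMES, hog_dim))
    x = layers.Reshape((NUM_FRAMES, hog_dim, 1))(inputs)
    x = layers.TimeDistributed(
        layers.Conv1D(32, 9, strides=4, activation="relu"))(x)
    x = layers.TimeDistributed(layers.GlobalMaxPooling1D())(x)
    x = layers.LSTM(128)(x)    # retains historical (temporal) context
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(inputs, outputs)

# Usage sketch ("video.avi" is a placeholder path):
#   frames = sample_frames("video.avi")
#   feats = hog_features(frames)
#   model = build_cnn_lstm(hog_dim=feats.shape[1])
#   probs = model.predict(feats[np.newaxis, ...])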


References

Angel B, Miguel L, Antonio J, Gonzalez-Abril L. Mobile activity recognition and fall detection system for elderly people using Ameva algorithm. Pervasive Mob Comput. 2017;34:3–13.

Anuradha K, Anand V, Raajan NR. Identification of human actor in various scenarios by applying background modeling. Multimed Tools Appl. 2019;79:3879–91.

Bajaj P, Pandey M, Tripathi V, Sanserwal V. Efficient motion encoding technique for activity analysis at ATM premises. In: Progress in Advanced Computing and Intelligent Engineering. Berlin: Springer; 2019. p. 393–402.

Chaquet JM, Carmona EJ, Fernández-Caballero A. A survey of video datasets for human action and activity recognition. Comput Vis Image Underst. 2013;117:633–59.

Chavarriaga R, Sagha H, Calatroni A, Digumarti S, Tröster G, Millán J, Roggen D. The opportunity challenge: a benchmark database for on-body sensor-based activity recognition. Pattern Recognit Lett. 2013;34:2033–42.

Chen BH, Shi LF, Ke X. A robust moving object detection in multi-scenario big data for video surveillance. IEEE Trans Circuits Syst Video Technol. 2018;29(4):982–95.

Dedeoğlu Y, Töreyin BU, Güdükbay U, Çetin AE. Silhouette-based method for object classification and human action recognition in video. In: European Conference on Computer Vision. Berlin: Springer; 2006. p. 64–77.

Donahue J, Hendricks LA, Guadarrama S, Rohrbach M, Venugopalan S, Saenko K, Darrell T. Long-term recurrent convolutional networks for visual recognition and description. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2015; pp. 2625–34.

Ehatisham-Ul-Haq M, Javed A, Azam MA, Malik HM, Irtaza A, Lee IH, Mahmood MT. Robust human activity recognition using multimodal feature-level fusion. IEEE Access. 2019;7:60736–51.

Fernando B, Anderson P, Hutter M, Gould S. Discriminative hierarchical rank pooling for activity recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016, pp. 1924–32.

Gammulle H, Denman S, Sridharan S, Fookes C. Two stream LSTM: a deep fusion framework for human action recognition. In: 2017 IEEE Winter Conference on Applications of Computer Vision (WACV). 2017.

Hegde N, Bries M, Swibas T, Melanson E, Sazonov E. Automatic recognition of activities of daily living utilizing insole based and wrist worn wearable sensors. IEEE J Biomed Health Inform. 2017;22:979–88.

Huan RH, Xie CJ, Guo F, Chi KK, Mao KJ, Li YL, Pan Y. Human action recognition based on HOIRM feature fusion and AP clustering BOW. PLoS One. 2019;14(7):e0219910.

Jain A, Zamir AR, Savarese S, Saxena A. Structural-RNN: Deep learning on spatio-temporal graphs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016, pp. 5308–17.

Ke S, Thuc H, Lee Y, Hwang J, Yoo J, Choi K. A review on video-based human activity recognition. Computers. 2013;2:88–131.

Keyvanpour M, Serpush F. ESLMT: a new clustering method for biomedical document retrieval. Biomed Eng. 2019;64(6):729–41.

Khaire P, Kumar P, Imran J. Combining CNN streams of RGB-D and skeletal data for human activity recognition. Pattern Recognit Lett. 2018;115:107–16.

Kumari P, Mathew L, Syal P. Increasing trend of wearables and multimodal interface for human activity monitoring: a review. Biosens Bioelectron. 2017;90:298–307.

Li X, Chuah MC. ReHAR: Robust and efficient human activity recognition. In: 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). 2018, pp. 362–71.

Liu AA, Xu N, Nie WZ, Su YT, Zhang YD. Multi-domain and multi-task learning for human action recognition. IEEE Trans Image Process. 2019;28(2):853–67.

Lohit S, Bansal A, Shroff N, Pillai J, Turaga P, Chellappa R. Predicting dynamical evolution of human activities from a single image. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2018, pp. 383–92.

Ma CY, Chen MH, Kira Z, AlRegib G. TS-LSTM and temporal inception: exploiting spatiotemporal dynamics for activity recognition. Signal Process Image Commun. 2019;71:76–87.

Ma S, Sigal L, Sclaroff S. Learning activity progression in LSTMs for activity detection and early detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016, pp. 1942–50.

Mahasseni B, Todorovic S. Regularizing long short term memory with 3D human-skeleton sequences for action recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2016, pp. 3054–62.

Majd M, Safabakhsh R. Correlational convolutional LSTM for human action recognition. Neurocomputing. 2020;396:224–9.

Molchanov P, Yang X, Gupta S, Kim K, Tyree S, Kautz J. Online detection and classification of dynamic hand gestures with recurrent 3D convolutional neural network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016, pp. 4207–15.

Montes A, Salvador A, Pascual S, Giro-i Nieto X. Temporal activity detection in untrimmed videos with recurrent neural networks. 2016;1–5.

Núñez JC, Cabido R, Pantrigo JJ, Montemayor AS, Vélez JF. Convolutional neural networks and long short-term memory for skeleton-based human activity and hand gesture recognition. Pattern Recognit. 2018;76:80–94.

Park E, Han X, Berg T, Berg AC. Combining multiple sources of knowledge in deep CNNs for action recognition. In: 2016 IEEE Winter Conference on Applications of Computer Vision (WACV). 2016, pp. 1–8.

Patel C, Garg S, Zaveri T, Banerjee A, Patel R. Human action recognition using fusion of features for unconstrained video sequences. Comput Electr Eng. 2018;70:284–301.

Pham HH, Khoudour L, Crouzil A, Zegers P, Velastin SA. Exploiting deep residual networks for human action recognition from skeletal data. Comput Vis Image Underst. 2018;170:51–66.

Rahmani H, Mian A, Shah M. Learning a deep model for human action recognition from novel viewpoints. IEEE Trans Pattern Anal Mach Intell. 2018;40:667–81.

Rezaei M, Azarmi M. Deep-SOCIAL: social distancing monitoring and infection risk assessment in COVID-19 pandemic. Appl Sci. 2020;10:1–29.

Rezaei M, Fasih A. A hybrid method in driver and multisensory data fusion, using a fuzzy logic supervisor for vehicle intelligence. In: IEEE International Conference on Sensor Technologies and Applications. 2007, pp. 393–98.

Rezaei M, Klette R. Look at the driver, look at the road: No distraction! no accident! In: 2014 IEEE Conference on Computer Vision and Pattern Recognition. 2014, pp. 129–36.

Rezaei M, Shahidi M. Zero-shot learning and its applications from autonomous vehicles to COVID-19 diagnosis: a review. Intell-Based Med. 2020;3–4:1–27. https://doi.org/10.1016/j.ibmed.2020.100005.

Ronao CA, Cho SB. Deep convolutional neural networks for human activity recognition with smartphone sensors. In: International Conference on Neural Information Processing. Cham: Springer; 2015. p. 46–53.

Sargano AB, Wang X, Angelov P, Habib Z. Human action recognition using transfer learning with deep representations. In: 2017 International joint conference on neural networks (IJCNN). IEEE. 2017, pp. 463–69.

Schneider B, Banerjee T. Activity recognition using imagery for smart home monitoring. In: Advances in Soft Computing and Machine Learning in Image Processing. Berlin: Springer; 2018. p. 355–71.

Shahroudy A, Ng T, Gong Y, Wang G. Deep multimodal feature analysis for action recognition in RGB+D videos. IEEE Trans Pattern Anal Mach Intell. 2018;40:1045–58.

Sharif M, Khan MA, Zahid F, Shah JH, Akram T. Human action recognition: a framework of statistical weighted segmentation and rank correlation-based selection. Pattern Anal Appl. 2019;23:281–94.

Sharma S, Kiros R, Salakhutdinov R. Action recognition using visual attention. In: International Conference on Learning Representations (ICLR). 2016, pp. 1–11.

Singh B, Marks TK, Jones M, Tuzel O, Shao M. A multi-stream bi-directional recurrent neural network for fine-grained action detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016, pp. 1961–70.

Singh R, Kushwaha AKS, Srivastava R. Multi-view recognition system for human activity based on multiple features for video surveillance system. Multimed Tools Appl. 2019;78:17168–9.

Soomro K, Zamir AR, Shah M. UCF101: A dataset of 101 human actions classes from videos in the wild. 2012;1–7. arXiv:1212.0402.

Tao L, Volonakis T, Tan B, Jing Y, Chetty K, Smith M. Home activity monitoring using low resolution infrared sensor. 2018;1–8. arXiv:1811.05416.

Tu Z, Xie W, Qin Q, Poppe R, Veltkamp R, Li B, Yuan J. Learning representations based on human-related regions for action recognition. Pattern Recognit. 2018;79:32–43.

Turaga P, Chellappa R, Subrahmanian VS, Udrea O. Machine recognition of human activities: a survey. IEEE Trans Circuits Syst Video Technol. 2008;18(11):1473–88.

Ullah A, Muhammad K, Del Ser J, Baik SW, Albuquerque V. Activity recognition using temporal optical flow convolutional features and multi-layer LSTM. IEEE Trans Ind Electron. 2018;66:9692–702.

Ullah A, Ahmad J, Muhammad K, Sajjad M, Baik SW. Action recognition in video sequences using deep bi-directional LSTM with CNN features. IEEE Access. 2018;6:1155–66.

Varior RR, Haloi M, Wang G. Gated siamese convolutional neural network architecture for human re-identification. In: European Conference on Computer Vision. Cham: Springer; 2016. p. 791–808.

Wang X, Gao L, Song J, Shen H. Beyond frame-level CNN: saliency-aware 3-d cnn with LSTM for video action recognition. IEEE Signal Process Lett. 2017;24:510–4.

Wang X, Gao L, Song J, Zhen X, Sebe N, Shen H. Deep appearance and motion learning for egocentric activity recognition. Neurocomputing. 2018;275:438–47.

Wang X, Gao L, Wang P, Sun X, Liu X. Two-stream 3-D convNet fusion for action recognition in videos with arbitrary size and length. IEEE Trans Multimed. 2018;20:634–44.

Wang Y, Lu Q, Wang D, Liu W. Compressive background modeling for foreground extraction. J Electr Comput Eng. 2015;2015:1–9.

Wang Y, Wang S, Tang J, O'Hare N, Chang Y, Li B. Hierarchical attention network for action recognition in videos. 2016;1–9. arXiv:1607.06416.

Wu H, Liu J, Zha ZJ, Chen Z, Sun X. Mutually reinforced spatio-temporal convolutional tube for human action recognition. In: IJCAI. 2019, pp. 968–74.

Wu Y, Li J, Kong Y, Fu Y. Deep convolutional neural network with independent Softmax for large scale face recognition. In: Proceedings of the 2016 ACM on Multimedia Conference. 2016, pp. 1063–67.

Ye J, Qi G, Zhuang N, Hu H, Hua KA. Learning compact features for human activity recognition via probabilistic first-take-all. IEEE Trans Pattern Anal Mach Intell. 2018;42:126–39.

Zeng R, Wu J, Shao Z, Senhadji L, Shu H. Quaternion softmax classifier. Electron Lett. 2014;50:1929–31.

Zhou Q, Zhong B, Zhang Y, Li J, Fu Y. Deep alignment network based multi-person tracking with occlusion and motion reasoning. IEEE Trans Multimed. 2018;21(5):1183–94.

Zhu G, Zhang L, Shen P, Song J. Multimodal gesture recognition using 3-D convolution and convolutional LSTM. IEEE Access. 2017;5:4517–24.

Ziaeefard M, Bergevin R. Semantic human activity recognition: A literature review. Pattern Recognit. 2015;48:2329–45.

Zolfaghari M, Singh K, Brox T. ECO: Efficient convolutional network for online video understanding. In: Proceedings of the European Conference on Computer Vision (ECCV). 2018, pp. 695–712.


Published

26.03.2024

How to Cite

Lakshmi Alekhya Jandhyam, Ragupathy Rengaswamy, Narayana Satyala. (2024). Identifying Complex Human Actions with a Hierarchical Feature Reduction and Deep Learning-Based Approach. International Journal of Intelligent Systems and Applications in Engineering, 12(21s), 789–801. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/5474

Issue

Vol. 12 No. 21s (2024)

Section

Research Article