A Systematic Review of various Fusion Techniques for Human Activity Recognition

Authors

  • Sandeep Kaur Gill Department of Computational Sciences Maharaja Ranjit Singh Punjab Technical University, Bathinda
  • Anju Sharma Department of Computational Sciences Maharaja Ranjit Singh Punjab Technical University, Bathinda

Keywords:

Human Activity Recognition, Dimensionality Reduction, Data Fusion, Feature Fusion, Classifier Fusion

Abstract

Human Activity Recognition (HAR) has gained considerable prominence because it plays a critical role in a wide range of applications, from healthcare monitoring to human-computer interaction. Achieving both accuracy and efficiency in representing and recognizing an activity is one of the central goals of the domain. Beyond purely technical advances, making full use of the resources and techniques already at hand is another significant route to accuracy and efficiency. Embedding multiplicity into the sub-tasks via the fusion of multiple sources is one way to ensure that the resources enrolled in the task are utilized effectively and to the fullest. In HAR, fusion can be considered from three perspectives, namely data fusion, feature fusion and classifier fusion. This paper surveys research work that implemented fusion from any of these three perspectives in the process of recognizing the activity. Beyond applying each fusion criterion individually, multiplicity can also be embedded by combining several modes of fusion, so the review also presents work that implemented fusion via multiple criteria to optimize recognition of the activity being executed. Section 1 gives an overview of the available techniques for representing and recognizing an activity and motivates embedding multiplicity in the recognition process; Section 2 discusses the three modes of fusion used to gain both accuracy and efficiency in HAR; Section 3 gives an overview of open research issues; and Section 4 concludes by justifying the importance of fusion.
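To make the three fusion perspectives concrete, they can be sketched on toy data. This is a purely illustrative example, not drawn from any surveyed work; all signals, features and labels below are hypothetical:

```python
from collections import Counter

# --- Data fusion: combine raw readings from two sensors into one stream ---
accel = [0.9, 1.1, 1.0]      # hypothetical accelerometer magnitudes
gyro = [0.2, 0.25, 0.22]     # hypothetical gyroscope magnitudes
# One simple scheme: element-wise averaging of the synchronized streams.
data_fused = [(a + g) / 2 for a, g in zip(accel, gyro)]

# --- Feature fusion: concatenate per-sensor feature vectors ---
def features(signal):
    """Two toy features: mean and signal energy."""
    mean = sum(signal) / len(signal)
    energy = sum(x * x for x in signal)
    return [mean, energy]

# A single joint feature vector fed to one classifier.
feature_fused = features(accel) + features(gyro)

# --- Classifier fusion: majority vote over independent classifiers ---
def majority_vote(predictions):
    """Return the label predicted by the most classifiers."""
    return Counter(predictions).most_common(1)[0][0]

label = majority_vote(["walking", "walking", "running"])
print(data_fused, feature_fused, label)
```

The surveyed literature explores far richer variants of each stage (e.g. weighted voting, Dempster-Shafer combination, learned fusion layers); this sketch only fixes where in the pipeline each mode of fusion operates.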

References

Abdelgawad, A., & Bayoumi, M.: Resource-Aware Data Fusion Algorithms for Wireless Sensor Networks, vol. 118. Springer US, 17–35 (2012). https://doi.org/10.1007/978-1-4614-1350-9

Abid, M. H., Nahid, A.-A., Islam, Md. R., & Parvez Mahmud, M. A.: Human Activity Recognition Based on Wavelet-Based Features along with Feature Prioritization. 2021 IEEE 6th International Conference on Computing, Communication and Automation (ICCCA), 933–939 (2021). https://doi.org/10.1109/ICCCA52192.2021.9666294

Almaslukh, B., Artoli, A. & Al-Muhtadi J.: A Robust Deep Learning Approach for Position-Independent Smartphone-Based Human Activity Recognition. Sensors, 18(11), 3726 (2018).

Amrita, Joshi, S., Kumar, R., Dwivedi, A., Rai, V., & Chauhan, S. S.: Water wave optimized nonsubsampled shearlet transformation technique for multimodal medical image fusion. Concurrency and Computation: Practice and Experience, 35(7), e7591(2023). https://doi.org/10.1002/cpe.7591

Aydin, I.: Fuzzy Integral and Cuckoo Search Based Classifier Fusion for Human Action Recognition. Advances in Electrical and Computer Engineering, 18(1), 3–10 (2018). https://doi.org/10.4316/AECE.2018.01001

Bagheri, M. A., Hu, G., Gao, Q., & Escalera, S.: A Framework of Multi-classifier Fusion for Human Action Recognition. 2014 22nd International Conference on Pattern Recognition, 1260–1265 (2014). https://doi.org/10.1109/ICPR.2014.226

Capela, N. A., Lemaire, E. D., & Baddour, N.: Feature Selection for Wearable Smartphone-Based Human Activity Recognition with Able bodied, Elderly, and Stroke Patients. PLOS ONE, 10(4), e0124414 (2015). https://doi.org/10.1371/journal.pone.0124414

Channi, H. K., Sandhu, R., Faiz, M., & Islam, S. M.: Multi-Criteria Decision-Making Approach for Laptop Selection: A Case Study. 2023 3rd Asian Conference on Innovation in Technology (ASIANCON), 1–5, IEEE (2023).

Chaturvedi, P., Daniel, A. K., & Narayan, V.: A Novel Heuristic for Maximizing Lifetime of Target Coverage in Wireless Sensor Networks. Advanced Wireless Communication and Sensor Networks, Chapman and Hall/CRC, 227–242.

Chen, C., Jafari, R., & Kehtarnavaz, N.: A survey of depth and inertial sensor fusion for human action recognition. Multimedia Tools and Applications, 76(3), 4405–4425 (2017). https://doi.org/10.1007/s11042-015-3177-1

Chen, J., Sun, Y., & Sun, S.: Improving Human Activity Recognition Performance by Data Fusion and Feature Engineering. Sensors, 21(3), 692 (2021). https://doi.org/10.3390/s21030692

Chen, Z., Zhu, Q., Soh, Y. C., & Zhang, L.: Robust Human Activity Recognition Using Smartphone Sensors via CT-PCA and Online SVM. IEEE Transactions on Industrial Informatics, 13(6), 3070–3080 (2017). https://doi.org/10.1109/TII.2017.2712746

Chernbumroong, S., Cang, S., Atkins, A., & Yu, H.: Elderly activities recognition and classification for applications in assisted living. Expert Systems with Applications, 40(5), 1662–1674 (2013). https://doi.org/10.1016/j.eswa.2012.09.004

Chetty, G., White, M., Singh, M., & Mishra, A.: Multimodal activity recognition based on automatic feature discovery. 2014 International Conference on Computing for Sustainable Global Development (INDIACom), 632–637 (2014). https://doi.org/10.1109/IndiaCom.2014.6828039

Chetty, G., & White, M.: Body sensor networks for human activity recognition. 2016 3rd International Conference on Signal Processing and Integrated Networks (SPIN), 660–665 (2016). https://doi.org/10.1109/SPIN.2016.7566779

Chung, S., Lim, J., Noh, K. J., Kim, G., & Jeong, H.: Sensor Data Acquisition and Multimodal Sensor Fusion for Human Activity Recognition Using Deep Learning. Sensors, 19(7), 1716 (2019). https://doi.org/10.3390/s19071716

Cordell, K. D., Rao, H., & Lyons, J.: Authentic Assessments: a method to detect anomalies in assessment response patterns via neural network. Health Services and Outcomes Research Methodology, 21, 439–458 (2021). https://doi.org/10.1007/s10742-021-00245-9

Dalal, N., Triggs, B., & Schmid, C.: Human Detection Using Oriented Histograms of Flow and Appearance. Computer Vision – ECCV 2006, 428–441 (2006). https://doi.org/10.1007/11744047_33

Dietterich, T. G.: Ensemble Methods in Machine Learning. Multiple Classifier Systems, 1–15 (2000). https://doi.org/10.1007/3-540-45014-9_1

Elharrouss, O., Almaadeed, N., Al-Maadeed, S., Bouridane, A., & Beghdadi, A.: A combined multiple action recognition and summarization for surveillance video sequences. Applied Intelligence, 51(2), 690–712 (2021). https://doi.org/10.1007/s10489-020-01823-z

Faiz, M., & Daniel, A. K.: A hybrid WSN based two-stage model for data collection and forecasting water consumption in metropolitan areas. International Journal of Nanotechnology, 20(5-10), 851–879 (2023).

Faiz, M., Sandhu, R., Akbar, M., Shaikh, A. A., Bhasin, C., & Fatima, N.: Machine Learning Techniques in Wireless Sensor Networks: Algorithms, Strategies, and Applications. International Journal of Intelligent Systems and Applications in Engineering, 11(9s), 685–694 (2023).

Fortino, G., Guzzo, A., Ianni, M., Leotta, F., & Mecella, M.: Predicting activities of daily living via temporal point processes: Approaches and experimental results. Computers & Electrical Engineering, 96, 107567 (2021). https://doi.org/10.1016/j.compeleceng.2021.107567

Freund, Y., & Schapire, R. E.: A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. Journal of Computer and System Sciences, 55(1), 119–139 (1997). https://doi.org/10.1006/jcss.1997.1504

Gao, Z., Zhang, H., Xu, G. P., Xue, Y. B., & Hauptmann, A. G.: Multi-view discriminative and structured dictionary learning with group sparsity for human action recognition. Signal Processing, 112, 83–97 (2015). https://doi.org/10.1016/j.sigpro.2014.08.034

Ghorbel, E., Boutteau, R., Boonaert, J., Savatier, X., & Lecoeuche, S.: Kinematic Spline Curves: A temporal invariant descriptor for fast action recognition. Image and Vision Computing, 77, 60–71 (2018). https://doi.org/10.1016/j.imavis.2018.06.004

Goyani, M., & Patel, N.: Multi-Level Haar Wavelet based Facial Expression Recognition using Logistic Regression. Indian Journal of Science and Technology, 10(9), 1–9 (2017). https://doi.org/10.17485/ijst/2017/v10i9/108944

Gravina, R., Alinia, P., Ghasemzadeh, H., & Fortino, G.: Multi-sensor fusion in body sensor networks: State-of-the-art and research challenges. Information Fusion, 35, 68–80 (2017). https://doi.org/10.1016/j.inffus.2016.09.005

Guan, Y., & Plötz, T.: Ensembles of Deep LSTM Learners for Activity Recognition using Wearables. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 1(2), 1–28 (2017). https://doi.org/10.1145/3090076

Gumaei, A., Hassan, M. M., Alelaiwi, A., & Alsalman, H.: A Hybrid Deep Learning Model for Human Activity Recognition Using Multimodal Body Sensing Data. IEEE Access, 7, 99152–99160 (2019). https://doi.org/10.1109/ACCESS.2019.2927134

Holte, M. B., Moeslund, T. B., Nikolaidis, N., & Pitas, I.: 3D Human Action Recognition for Multi-view Camera Systems. 2011 International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, 342–349 (2011). https://doi.org/10.1109/3DIMPVT.2011.50

Huang, Y. S., & Suen, C. Y.: A method of combining multiple experts for the recognition of unconstrained handwritten numerals. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(1), 90–94 (1995). https://doi.org/10.1109/34.368145

Hussain, S. ul, & Triggs, B.: Feature Sets and Dimensionality Reduction for Visual Object Detection. Proceedings of the British Machine Vision Conference 2010, 112.1-112.10 (2010). https://doi.org/10.5244/C.24.112

Hutchinson, M., Samsi, S., Arcand, W., Bestor, D., Bergeron, B., Byun, C., Houle, M., Hubbell, M., Jones, M., Kepner, J., Kirby, A., Michaleas, P., Milechin, L., Mullen, J., Prout, A., Rosa, A., Reuther, A., Yee, C., & Gadepally, V.: Accuracy and Performance Comparison of Video Action Recognition Approaches. 2020 IEEE High Performance Extreme Computing Conference (HPEC), 1–8 (2020). https://doi.org/10.1109/HPEC43674.2020.9286249

Ihianle, I. K., Nwajana, A. O., Ebenuwa, S. H., Otuka, R. I., Owa, K., & Orisatoki, M. O.: A Deep Learning Approach for Human Activities Recognition From Multimodal Sensing Devices. IEEE Access, 8, 179028–179038 (2020). https://doi.org/10.1109/ACCESS.2020.3027979

Ijjina, E. P., & Chalavadi, K. M.: Human action recognition in RGB-D videos using motion sequence information and deep learning. Pattern Recognition, 72, 504–516 (2017). https://doi.org/10.1016/j.patcog.2017.07.013

Islam, Md. M., Nooruddin, S., Karray, F., & Muhammad, G.: Multi-level feature fusion for multimodal human activity recognition in Internet of Healthcare Things. Information Fusion, 94, 17–31 (2023). https://doi.org/10.1016/j.inffus.2023.01.015

Islam, S., Qasim, T., Yasir, M., Bhatti, N., Mahmood, H., & Zia, M.: Single- and two-person action recognition based on silhouette shape and optical point descriptors. Signal, Image and Video Processing, 12(5), 853–860 (2018). https://doi.org/10.1007/s11760-017-1228-y

Indhumathi C., Murugan V., & Muthulakshmi G.: Spatio-Temporal Deep Feature Fusion for Human Action Recognition. International Journal of Computer Vision and Image Processing, 12(1), 1–13 (2022). https://doi.org/10.4018/IJCVIP.296584

Jain, Y., Sharma, A. K., Velmurugan, R., & Banerjee, B.: PoseCVAE: Anomalous Human Activity Detection. 2020 25th International Conference on Pattern Recognition (ICPR), 2927–2934 (2021). https://doi.org/10.1109/ICPR48806.2021.9412132

Jaouedi, N., Boujnah, N., & Bouhlel, M. S.: A new hybrid deep learning model for human action recognition. Journal of King Saud University - Computer and Information Sciences, 32(4), 447–453 (2020). https://doi.org/10.1016/j.jksuci.2019.09.004

Jaramillo, I. E., Chola, C., Jeong, J.-G., Oh, J.-H., Jung, H., Lee, J.-H., Lee, W. H., & Kim, T.-S.: Human Activity Prediction Based on Forecasted IMU Activity Signals by Sequence-to-Sequence Deep Neural Networks. Sensors, 23(14), 6491 (2023). https://doi.org/10.3390/s23146491

Joshi, S., Kumar, R., & Dwivedi, A.: Hybrid DSSCS and convolutional neural network for peripheral blood cell recognition system. IET Image Processing, 14(17), 4450–4460 (2020). https://doi.org/10.1049/iet-ipr.2020.0370

Khan, I. U., Afzal, S., & Lee, J. W.: Human Activity Recognition via Hybrid Deep Learning Based Model. Sensors, 22(1), 323 (2022). https://doi.org/10.3390/s22010323

Kuehne, H., Arslan, A., & Serre, T.: The Language of Actions: Recovering the Syntax and Semantics of Goal-Directed Human Activities. 2014 IEEE Conference on Computer Vision and Pattern Recognition, 780–787 (2014). https://doi.org/10.1109/CVPR.2014.105

Kumar, V., et al.: A Machine Learning Approach for Predicting Onset and Progression Towards Early Detection of Chronic Diseases. Journal of Pharmaceutical Negative Results, 6195–6202 (2022).

Kumar, R., Qamar, I., Virdi, J. S., & Krishnan, N. C.: Multi-label Learning for Activity Recognition. 2015 International Conference on Intelligent Environments, 152–155 (2015). https://doi.org/10.1109/IE.2015.32

Lara, Ó. D., Pérez, A. J., Labrador, M. A., & Posada, J. D.: Centinela: A human activity recognition system based on acceleration and vital sign data. Pervasive and Mobile Computing, 8(5), 717–729 (2012). https://doi.org/10.1016/j.pmcj.2011.06.004

Li, J., Fong, S., Wong, R. K., Millham, R., & Wong, K. K. L.: Elitist Binary Wolf Search Algorithm for Heuristic Feature Selection in High-Dimensional Bioinformatics Datasets. Scientific Reports, 7(1), 4354 (2017). https://doi.org/10.1038/s41598-017-04037-5

Li, X., Zhang, Y., Zhang, J., Chen, S., Marsic, I., Farneth, R. A., & Burd, R. S.: Concurrent Activity Recognition with Multimodal CNN-LSTM Structure. arXiv: 1702.01638v1 (2017). https://doi.org/10.48550/arXiv.1702.01638

Li, Y., Yang, G., Su, Z., Li, S., & Wang, Y.: Human activity recognition based on multienvironment sensor data. Information Fusion, 91, 47–63 (2023). https://doi.org/10.1016/j.inffus.2022.10.015

Liu, A.-A., Xu, N., Nie, W.-Z., Su, Y.-T., & Zhang, Y.-D.: Multi-Domain and Multi-Task Learning for Human Action Recognition. IEEE Transactions on Image Processing, 28(2), 853–867 (2019). https://doi.org/10.1109/TIP.2018.2872879

Liu, L., Shao, L., Li, X., & Lu, K.: Learning Spatio-Temporal Representations for Action Recognition: A Genetic Programming Approach. IEEE Transactions on Cybernetics, 46(1), 158–170 (2016). https://doi.org/10.1109/TCYB.2015.2399172

Liu, T., Chen, Z., Liu, H., Zhang, Z., & Chen, Y.: Multi-modal hand gesture designing in multi-screen touchable teaching system for human-computer interaction. Proceedings of the 2nd International Conference on Advances in Image Processing, 198–202 (2018). https://doi.org/10.1145/3239576.3239619

Mall, P. K., et al.: A comprehensive review of deep neural networks for medical image processing: Recent developments and future opportunities. Healthcare Analytics, 4, 100216 (2023).

Mall, P. K., et al.: Rank Based Two Stage Semi-Supervised Deep Learning Model for X-Ray Images Classification: An Approach Toward Tagging Unlabeled Medical Dataset. Journal of Scientific & Industrial Research (JSIR), 82(08), 818–830 (2023).

Ma, S., Zhang, J., Sclaroff, S., Ikizler-Cinbis, N., & Sigal, L.: Space-Time Tree Ensemble for Action Recognition and Localization. International Journal of Computer Vision, 126(2–4), 314–332 (2018). https://doi.org/10.1007/s11263-016-0980-8

Morshed, M. G., Sultana, T., Alam, A., & Lee, Y.-K.: Human Action Recognition: A Taxonomy-Based Survey, Updates, and Opportunities. Sensors, 23(4), 2182 (2023). https://doi.org/10.3390/s23042182

Münzner, S., Schmidt, P., Reiss, A., Hanselmann, M., Stiefelhagen, R., & Dürichen, R.: CNN-based sensor fusion techniques for multimodal human activity recognition. Proceedings of the 2017 ACM International Symposium on Wearable Computers, 158–165 (2017). https://doi.org/10.1145/3123021.3123046

Muralikrishna, S. N., Muniyal, B., Acharya, U. D., & Holla, R.: Enhanced Human Action Recognition Using Fusion of Skeletal Joint Dynamics and Structural Features. Journal of Robotics, 2020, 1–16 (2020). https://doi.org/10.1155/2020/3096858

Najjar, N., & Gupta, S.: Better-than-the-best fusion algorithm with application in human activity recognition. SPIE Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2015, 949805-949810 (2015). https://doi.org/10.1117/12.2177123

Narayan, V., et al.: A Comprehensive Review of Various Approach for Medical Image Segmentation and Disease Prediction. Wireless Personal Communications, 132(3), 1819–1848 (2023).

Narayan, V., et al.: Severity of Lumpy Disease detection based on Deep Learning Technique. 2023 International Conference on Disruptive Technologies (ICDT), IEEE (2023).

Narayan, V., et al.: Extracting business methodology: using artificial intelligence-based method. Semantic Intelligent Computing and Applications, 16, 123 (2023).

Nguyen, D. T., Li, W., & Ogunbona, P. O.: Local intensity distribution descriptor for object detection. Electronics Letters, 47(5), 321 (2011). https://doi.org/10.1049/el.2010.3256

Nguyen, D. T., Ogunbona, P., & Li, W.: Human detection with contour-based local motion binary patterns. 2011 18th IEEE International Conference on Image Processing, 3609–3612 (2011). https://doi.org/10.1109/ICIP.2011.6116498

Nweke, H. F., Teh, Y. W., Mujtaba, G., & Al-garadi, M. A.: Data fusion and multiple classifier systems for human activity detection and health monitoring: Review and open research directions. Information Fusion, 46, 147–170 (2019). https://doi.org/10.1016/j.inffus.2018.06.002

Ordóñez, F., & Roggen, D.: Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition. Sensors, 16(1), 115 (2016). https://doi.org/10.3390/s16010115

Pai, M. M. M., Ganiga, R., Pai, R. M., & Sinha, R. K.: Standard Electronic Health Record (EHR) framework for Indian healthcare system. Health Services and Outcomes Research Methodology, 21, 339–362 (2021). https://doi.org/10.1007/s10742-020-00238-0

Pareek, P., & Thakkar, A.: RGB-D based human action recognition using evolutionary self-adaptive extreme learning machine with knowledge-based control parameters. Journal of Ambient Intelligence and Humanized Computing, 14(2), 939–957 (2023). https://doi.org/10.1007/s12652-021-03348-w

Patel, C. I., Garg, S., Zaveri, T., Banerjee, A., & Patel, R.: Human action recognition using fusion of features for unconstrained video sequences. Computers & Electrical Engineering, 70, 284–301 (2018). https://doi.org/10.1016/j.compeleceng.2016.06.004

Patel, C. I., Labana, D., Pandya, S., Modi, K., Ghayvat, H., & Awais, M.: Histogram of Oriented Gradient-Based Fusion of Features for Human Action Recognition in Action Video Sequences. Sensors, 20(24), 7299 (2020). https://doi.org/10.3390/s20247299

Peng, L., Chen, L., Wu, X., Guo, H., & Chen, G.: Hierarchical Complex Activity Representation and Recognition Using Topic Model and Classifier Level Fusion. IEEE Transactions on Biomedical Engineering, 64(6), 1369–1379 (2017). https://doi.org/10.1109/TBME.2016.2604856

Ponti Jr., M. P.: Combining Classifiers: From the Creation of Ensembles to the Decision Fusion. 2011 24th SIBGRAPI Conference on Graphics, Patterns, and Images Tutorials, 1–10 (2011). https://doi.org/10.1109/SIBGRAPI-T.2011.9

Prakash Yadav, S., & Yadav, S.: Fusion of Medical Images in Wavelet Domain: A Hybrid Implementation. Computer Modeling in Engineering & Sciences, 122(1), 303–321 (2020). https://doi.org/10.32604/cmes.2020.08459

Prakash Yadav, S., & Yadav, S.: Image Fusion using Hybrid Methods in Multimodality Medical Images. Medical & Biological Engineering & Computing, 58, 669-687 (2020). https://doi.org/10.1007/s11517-020-02136-6

Qiu, S., Zhao, H., Jiang, N., Wang, Z., Liu, L., An, Y., Zhao, H., Miao, X., Liu, R., & Fortino, G.: Multi-sensor information fusion based on machine learning for real applications in human activity recognition: State-of-the-art and research challenges. Information Fusion, 80, 241–265 (2022). https://doi.org/10.1016/j.inffus.2021.11.006

Rai, V., Gupta, G., Joshi, S., Kumar, R., & Dwivedi, A.: LSTM-based adaptive whale optimization model for classification of fused multimodality medical image. Signal, Image and Video Processing, 17(5), 2241–2250 (2023). https://doi.org/10.1007/s11760-022-02439-1

Ravanbakhsh, M., Nabi, M., Sangineto, E., Marcenaro, L., Regazzoni, C., & Sebe, N.: Abnormal event detection in videos using generative adversarial nets. 2017 IEEE International Conference on Image Processing (ICIP), 1577–1581 (2017). https://doi.org/10.1109/ICIP.2017.8296547

Roche, J., De-Silva, V., Hook, J., Moencks, M., & Kondoz, A.: A Multimodal Data Processing System for LiDAR-Based Human Activity Recognition. IEEE Transactions on Cybernetics, 52(10), 10027–10040 (2022). https://doi.org/10.1109/TCYB.2021.3085489

Rogova, G.: Combining the results of several neural network classifiers. Neural Networks, 7(5), 777–781 (1994). https://doi.org/10.1016/0893-6080(94)90099-X

Saxena, A., et al.: Comparative Analysis of AI Regression and Classification Models for Predicting House Damages in Nepal: Proposed Architectures and Techniques. Journal of Pharmaceutical Negative Results, 6203–6215 (2022).

Schrader, L., Vargas Toro, A., Konietzny, S., Rüping, S., Schäpers, B., Steinböck, M., Krewer, C., Müller, F., Güttler, J., & Bock, T.: Advanced Sensing and Human Activity Recognition in Early Intervention and Rehabilitation of Elderly People. Journal of Population Ageing, 13(2), 139–165 (2020). https://doi.org/10.1007/s12062-020-09260-z

Sebbak, F., & Benhammadi, F.: Majority-consensus fusion approach for elderly IoT-based healthcare applications. Annals of Telecommunications, 72(3–4), 157–171 (2017). https://doi.org/10.1007/s12243-016-0550-7

Sharaf, A., Torki, M., Hussein, M. E., & El-Saban, M.: Real-Time Multi-scale Action Detection from 3D Skeleton Data. 2015 IEEE Winter Conference on Applications of Computer Vision, 998–1005 (2015). https://doi.org/10.1109/WACV.2015.138

Sharif, M., Khan, M. A., Akram, T., Javed, M. Y., Saba, T., & Rehman, A.: A framework of human detection and action recognition based on uniform segmentation and combination of Euclidean distance and joint entropy-based features selection. EURASIP Journal on Image and Video Processing, 2017(1), 89 (2017). https://doi.org/10.1186/s13640-017-0236-8

Shen, C., Chen, Y., & Yang, G.: On motion-sensor behavior analysis for human-activity recognition via smartphones. 2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), 1–6 (2016). https://doi.org/10.1109/ISBA.2016.7477231

Song, S., Lan, C., Xing, J., Zeng, W., & Liu, J.: Spatio-Temporal Attention-Based LSTM Networks for 3D Action Recognition and Detection. IEEE Transactions on Image Processing, 27(7), 3459–3471 (2018). https://doi.org/10.1109/TIP.2018.2818328

Srivastava, N., Mansimov, E., & Salakhutdinov, R.: Unsupervised Learning of Video Representations using LSTMs. International Conference on Machine Learning, 843-852. PMLR (2015).

Tao, W., Chen, H., Moniruzzaman, M., Leu, M. C., Yi, Z., & Qin, R.: Attention-Based Sensor Fusion for Human Activity Recognition Using IMU Signals. Engineering Applications of Artificial Intelligence (2021). https://doi.org/10.48550/arXiv.2112.11224

Thakur, D., & Biswas, S.: Feature fusion using deep learning for smartphone based human activity recognition. International Journal of Information Technology, 13(4), 1615–1624 (2021). https://doi.org/10.1007/s41870-021-00719-6

Uddin, M., & Lee, Y.-K.: Feature Fusion of Deep Spatial Features and Handcrafted Spatiotemporal Features for Human Action Recognition. Sensors, 19(7), 1599 (2019). https://doi.org/10.3390/s19071599

Valdovinos, R. M., & Sánchez, J. S.: Combining Multiple Classifiers with Dynamic Weighted Voting. International Conference on Hybrid Artificial Intelligence Systems, 510–516 (2009). https://doi.org/10.1007/978-3-642-02319-4_61

Vemulapalli, R., & Chellappa, R.: Rolling Rotations for Recognizing Human Actions from 3D Skeletal Data. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 4471–4479 (2016). https://doi.org/10.1109/CVPR.2016.484

Vidya, B., & Sasikumar, P.: Wearable multi-sensor data fusion approach for human activity recognition using machine learning algorithms. Sensors and Actuators A: Physical, 341, 113557 (2022). https://doi.org/10.1016/j.sna.2022.113557

Wang, D., Ouyang, W., Li, W., & Xu, D.: Dividing and Aggregating Network for Multi-view Action Recognition. European Conference on Computer Vision (ECCV), 457–473 (2018). https://doi.org/10.1007/978-3-030-01240-3_28

Wang, L., Ding, Z., Tao, Z., Liu, Y., & Fu, Y.: Generative Multi-View Human Action Recognition. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 6211–6220 (2019). https://doi.org/10.1109/ICCV.2019.00631

Ward, J. A., Lukowicz, P., Troster, G., & Starner, T. E.: Activity Recognition of Assembly Tasks Using Body-Worn Microphones and Accelerometers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(10), 1553–1567 (2006). https://doi.org/10.1109/TPAMI.2006.197

Woźniak, M., Graña, M., & Corchado, E.: A survey of multiple classifier systems as hybrid systems. Information Fusion, 16, 3–17 (2014). https://doi.org/10.1016/j.inffus.2013.04.006

Wu, H., Siegel, M., Stiefelhagen, R., & Yang, J.: Sensor Fusion Using Dempster-Shafer Theory. IEEE Instrumentation and Measurement Technology Conference (2002).

Wu, Z., Li, X., Zhao, X., & Liu, Y.: Hybrid generative-discriminative recognition of human action in 3D joint space. Proceedings of the 20th ACM International Conference on Multimedia, 1081–1084 (2012). https://doi.org/10.1145/2393347.2396388

Xia, K., Huang, J., & Wang, H.: LSTM-CNN Architecture for Human Activity Recognition. IEEE Access, 8, 56855–56866 (2020). https://doi.org/10.1109/ACCESS.2020.2982225

Xiao, Q., & Song, R.: Action recognition based on hierarchical dynamic Bayesian network. Multimedia Tools and Applications, 77(6), 6955–6968 (2018). https://doi.org/10.1007/s11042-017-4614-0

Xu, H., Liu, J., Hu, H., & Zhang, Y.: Wearable Sensor-Based Human Activity Recognition Method with Multi-Features Extracted from Hilbert-Huang Transform. Sensors, 16(12), 2048 (2016). https://doi.org/10.3390/s16122048

Zhang, C., Cao, K., Lu, L., & Deng, T.: A multi-scale feature extraction fusion model for human activity recognition. Scientific Reports, 12(1), 20620 (2022). https://doi.org/10.1038/s41598-022-24887-y

Zhang, Y., Li, X., & Marsic, I.: Multi-Label Activity Recognition using Activity-specific Features and Activity Correlations. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 14620–14630 (2021). https://doi.org/10.1109/CVPR46437.2021.01439

Zhu, J., San-Segundo, R., & Pardo, J. M.: Feature extraction for robust physical activity recognition. Human-Centric Computing and Information Sciences, 7(1), 16 (2017). https://doi.org/10.1186/s13673-017-0097-2

Published

24.03.2024

How to Cite

Gill, S. K., & Sharma, A. (2024). A Systematic Review of various Fusion Techniques for Human Activity Recognition. International Journal of Intelligent Systems and Applications in Engineering, 12(20s), 230–252. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/5135

Issue

Section

Research Article