Artificial Intelligence Based Sign Language Prediction by Using the Twin Delayed Deep Reinforcement Memory Network architecture

Authors

G. M. Karthick, P. Kirubanantham, A. Saranya, and M. Sayeekumar

Keywords:

Sign language recognition, Artificial intelligence, Fibonacci ripple Chebyshev filter, Linear embedding Hessian component analysis, Statistical looper wing butterfly optimization algorithm, Twin Delayed Deep Reinforcement Memory Network

Abstract

Advances in sign language recognition (SLR) can greatly aid communication between the hearing- and speech-impaired and the rest of society. Word-level sign language recognition (WSLR) is one of the most important building blocks of sign language comprehension. However, because the meaning of a word depends on a wide range of subtle body movements, hand configurations, and other cues, identifying signs in video can be difficult. Recent pose-based WSLR designs either model the temporal information without fully exploiting the spatial information, or capture the spatial but not the temporal correlations among the poses in different frames. To address the WSLR problem, we use a novel AI-based approach to collect posture data and carry out recognition. First, the extracted data are passed through a Fibonacci ripple Chebyshev filter (FRCF) for preliminary cleaning. Linear embedding Hessian component analysis (LEHCA) is then used to extract features. The statistical looper wing butterfly optimization (SLWBO) approach is then applied to segment the regions of interest. Finally, the Twin Delayed Deep Reinforcement Memory Network (TDDRMN) architecture explicitly analyzes the feature interactions and recognizes the meaning of the sign language to support decision making. Results on WLASL, a standard word-level sign language recognition dataset, show the system's superiority over traditional approaches, achieving highly accurate predictions with minimal decision-making time.
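The abstract outlines a four-stage pipeline: FRCF cleaning, LEHCA feature extraction, SLWBO region-of-interest segmentation, and TDDRMN recognition. Since no implementation is published with the paper, the Python sketch below only illustrates how such a staged pipeline might be wired together on pose sequences; every stage body (moving-average smoothing, PCA-style projection, energy thresholding, nearest-centroid classification) and every identifier is a hypothetical placeholder standing in for the paper's methods, not the authors' actual algorithms.

```python
# Minimal sketch of the pipeline described in the abstract. Every stage below
# is a simple generic stand-in (smoothing, PCA projection, energy thresholding,
# nearest-centroid), NOT the paper's FRCF / LEHCA / SLWBO / TDDRMN methods,
# whose implementations are not published.
import numpy as np


def preprocess(frames: np.ndarray) -> np.ndarray:
    """Stand-in for the FRCF cleaning stage: a 3-frame moving average
    applied to every pose coordinate to suppress frame-to-frame jitter."""
    kernel = np.ones(3) / 3.0
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, frames
    )


def extract_features(frames: np.ndarray, n_components: int = 16) -> np.ndarray:
    """Stand-in for LEHCA feature extraction: project the frames onto the
    strongest principal directions of the clip."""
    centered = frames - frames.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    _, vecs = np.linalg.eigh(cov)             # eigenvectors, ascending eigenvalue order
    return frames @ vecs[:, -n_components:]   # keep the top components


def segment_roi(features: np.ndarray) -> np.ndarray:
    """Stand-in for SLWBO region-of-interest segmentation: keep the frames
    whose feature energy is above the clip median (the 'active' frames)."""
    energy = np.linalg.norm(features, axis=1)
    return features[energy >= np.median(energy)]


class SignClassifier:
    """Stand-in for the TDDRMN recogniser: nearest centroid over the mean
    feature vector of each segmented clip."""

    def fit(self, clips, labels):
        self.centroids_ = {
            label: np.mean(
                [c.mean(axis=0) for c, l in zip(clips, labels) if l == label], axis=0
            )
            for label in set(labels)
        }
        return self

    def predict(self, clip):
        v = clip.mean(axis=0)
        return min(self.centroids_, key=lambda lab: np.linalg.norm(v - self.centroids_[lab]))


if __name__ == "__main__":
    # Toy demonstration of the data flow only (random clips, arbitrary labels).
    rng = np.random.default_rng(0)
    clips = [rng.normal(size=(40, 42)) for _ in range(6)]   # 40 frames x 42 pose coords
    labels = ["hello", "hello", "hello", "thanks", "thanks", "thanks"]

    processed = [segment_roi(extract_features(preprocess(c))) for c in clips]
    model = SignClassifier().fit(processed, labels)
    query = segment_roi(extract_features(preprocess(rng.normal(size=(40, 42)))))
    print("predicted gloss:", model.predict(query))
```

In practice the final stage would be a trained sequence model rather than a centroid rule; the sketch keeps each stage deliberately simple so that the end-to-end data flow from raw clip to predicted gloss stays visible.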

Figure: Different kinds of signs

Published

17.02.2023

How to Cite

M. Karthick, G., Kirubanantham, P., Saranya, A., & Sayeekumar, M. (2023). Artificial Intelligence Based Sign Language Prediction by Using the Twin Delayed Deep Reinforcement Memory Network architecture. International Journal of Intelligent Systems and Applications in Engineering, 11(2), 200–211. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/2611

Issue

Section

Research Article