Systematic Study of NLP Learning Models and Performance Evaluation

Authors

  • Jennifer D., Department of Computer Science and Engineering, Panimalar Engineering College, Chennai, India.
  • Valarmathi K., Department of Computer Science and Engineering, Panimalar Engineering College, Chennai, India.
  • Murali E., Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India.
  • Devi R., Department of Computer Science and Engineering, KCG College of Technology, Chennai, India.
  • Sathiya V., Department of Computer Science and Engineering, Panimalar Engineering College, Chennai, India.

Keywords

BERT, Sentence classification, Sentiment analysis, NLP Learning

Abstract

Recent advances in deep learning have significantly improved natural language processing tasks such as sentiment analysis, speech recognition, text classification, and named entity recognition. Sentence classification, one such NLP task, involves categorizing sentences into predefined classes based on their content. Sentiment analysis, also known as opinion mining, employs NLP and machine learning to identify the sentiment expressed in text (positive, negative, or neutral) in order to understand opinions and emotions. This paper offers a comprehensive exploration of advanced sentiment analysis approaches that employ BERT. Bidirectional Encoder Representations from Transformers (BERT) excels at capturing contextual relationships between words, making it well suited to sentiment analysis. The study encompasses both deep learning and machine learning approaches, analyzing 40 research papers; 21 of these utilize BERT for text classification, while the remainder employ general ML techniques. The paper compares BERT with other language models, investigates proprietary BERT-based models, and outlines challenges and research gaps in sentiment analysis.
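
To make the BERT-based sentence classification setting concrete, the following minimal sketch (not taken from any of the surveyed papers) classifies sentences as positive or negative using a pre-trained BERT checkpoint through the Hugging Face Transformers library; the checkpoint name and the two-way label set are illustrative assumptions.

# Minimal sketch: sentence-level sentiment classification with a pre-trained BERT
# checkpoint. The checkpoint name and labels are illustrative assumptions, not the
# specific models evaluated in the surveyed papers.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "textattack/bert-base-uncased-SST-2"  # assumed public BERT model fine-tuned on SST-2
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

sentences = [
    "The plot was predictable, but the performances were outstanding.",
    "Shipping took three weeks and the package arrived damaged.",
]

# Tokenize, run the encoder with its classification head, and map logits to labels.
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (batch_size, num_labels)
predictions = logits.argmax(dim=-1)            # for SST-2 heads: 0 = negative, 1 = positive

for sentence, label in zip(sentences, predictions.tolist()):
    print(f"{'positive' if label == 1 else 'negative'}: {sentence}")

Fine-tuning follows the same pattern: the pre-trained encoder is loaded with a fresh classification head and trained on labelled sentences, mirroring the recipe introduced by Devlin et al. (2018) cited in the references below.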

References

Chiorrini, A., Diamantini, C., Mircoli, A., & Potena, D. (2021). Emotion and sentiment analysis of tweets using BERT. EDBT/ICDTWorkshops.

Koroteev, M. (2021). BERT: A Review of Applications in Natural Language Processing and Understanding. arXiv. https://doi.org/10.48550/arXiv.2103.11943

Preetha, M., Anil Kumar, N., Elavarasi, K., Vignesh, T., & Nagaraju, V. (2022). A Hybrid Clustering Approach Based Q-Leach in TDMA to Optimize QOS-Parameters. Wireless Personal Communications, 123(2), 1169–1200. https://doi.org/10.1007/s11277-021-09175-8

Munikar, M., Shakya, S., & Shrestha, A. (2019). Fine-grained sentiment classification using BERT. In arXiv [cs.CL]. http://arxiv.org/abs/1910.03474.

M. Yasen and S. Tedmori, "Movies Reviews Sentiment Analysis and Classification," 2019 IEEE Jordan International Joint Conference on Electrical Engineering and Information Technology (JEEIT), Amman, Jordan, 2019, pp. 860-865, doi: 10.1109/JEEIT.2019.8717422.

A.M. Rajeswari, M. Mahalakshmi, R. Nithyashree and G. Nalini, "Sentiment Analysis for Predicting Customer Reviews using a Hybrid Approach," 2020 Advanced Computing and Communication Technologies for High Performance Applications (ACCTHPA), Cochin, India, 2020, pp. 200-205, doi: 10.1109/ACCTHPA49271.2020.9213236.

Shrestha, N., & Nasoz, F. (2019). Deep learning sentiment analysis of Amazon.com reviews and ratings. International Journal on Soft Computing Artificial Intelligence and Applications, 8(1), 01–15. https://doi.org/10.5121/ijscai.2019.8101

Preetha, M., Budaraju, R. R., Jackulin, C., Aruna Sri, P. S. G., & Padmapriya, T. (2024). Deep Learning-Driven Real-Time Multimodal Healthcare Data Synthesis. International Journal of Intelligent Systems and Applications in Engineering, 12(5), 360–369.

T. U. Haque, N. N. Saber and F. M. Shah, "Sentiment analysis on large scale Amazon product reviews," 2018 IEEE International Conference on Innovative Research and Development (ICIRD), Bangkok, Thailand, 2018, pp. 1-6, doi: 10.1109/ICIRD.2018.8376299.

Geetha, M. P., & Karthika Renuka, D. (2021). Improving the performance of aspect based sentiment analysis using fine-tuned Bert Base Uncased model. International Journal of Intelligent Networks, 2, 64–69.

Z. A. Guven, "Comparison of BERT Models and Machine Learning Methods for Sentiment Analysis on Turkish Tweets," 2021 6th International Conference on Computer Science and Engineering (UBMK), Ankara, Turkey, 2021, pp. 98-101, doi: 10.1109/UBMK52708.2021.9559014.

Santhosh Kumar, B., Geetha, M. P., Padmapriya, G., & Premkumar, M. (2020). An approach for improving the labelling in a text corpora using sentiment analysis. Advances in Mathematics: Scientific Journal, 9(10), 8165–8174. https://doi.org/10.37418/amsj.9.10.46.

Xu, H., Liu, B., Shu, L., & Yu, P. S. (2019). BERT post-training for Review Reading Comprehension and aspect-based sentiment analysis. In arXiv [cs.CL]. http://arxiv.org/abs/1904.02232.

Sun, C., Huang, L., & Qiu, X. (2019). Utilizing BERT for aspect-based sentiment analysis via constructing auxiliary sentence. In arXiv [cs.CL]. http://arxiv.org/abs/1903.09588.

S. Yu, J. Su and D. Luo, "Improving BERT-Based Text Classification With Auxiliary Sentence and Domain Knowledge," in IEEE Access, vol. 7, pp. 176600-176612, 2019, doi: 10.1109/ACCESS.2019.2953990.

Sun, C., Qiu, X., Xu, Y., & Huang, X. (2019). How to fine-tune BERT for text classification? In arXiv [cs.CL]. http://arxiv.org/abs/1905.05583.

Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., & Soricut, R. (2019). ALBERT: A lite BERT for self-supervised learning of language representations. In arXiv [cs.CL]. http://arxiv.org/abs/1909.11942.

Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. In arXiv [cs.CL]. http://arxiv.org/abs/1907.11692.

Ambalavanan, A. K., & Devarakonda, M. V. (2020). Using the contextual language model BERT for multi-criteria classification of scientific articles. Journal of Biomedical Informatics, 112(103578), 103578. https://doi.org/10.1016/j.jbi.2020.103578.

Singh, M., Jakhar, A. K., & Pandey, S. (2021). Sentiment analysis on the impact of coronavirus in social life using the BERT model. Social Network Analysis and Mining, 11(1), 1–11. https://doi.org/10.1007/s13278-021-00737-z.

You, Y., Li, J., Reddi, S., Hseu, J., Kumar, S., Bhojanapalli, S., Song, X., Demmel, J., Keutzer, K., & Hsieh, C.-J. (2019). Large Batch Optimization for Deep Learning: Training BERT in 76 minutes. In arXiv [cs.LG]. http://arxiv.org/abs/1904.00962.

Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional Transformers for language understanding. In arXiv [cs.CL]. http://arxiv.org/abs/1810.04805.

J. He and H. Hu, "MF-BERT: Multimodal Fusion in Pre-Trained BERT for Sentiment Analysis," in IEEE Signal Processing Letters, vol. 29, pp. 454-458, 2022, doi: 10.1109/LSP.2021.3139856.

Yenduri, G., Rajakumar, B. R., Praghash, K., & Binu, D. (2021). Heuristic-assisted BERT for Twitter Sentiment Analysis. International Journal of Computational Intelligence and Applications, 20(03). https://doi.org/10.1142/s1469026821500152.

Z. Gao, A. Feng, X. Song and X. Wu, "Target-Dependent Sentiment Classification With BERT," in IEEE Access, vol. 7, pp. 154290-154299, 2019, doi: 10.1109/ACCESS.2019.2946594.

Sun, Z., Sarma, P., Sethares, W., & Liang, Y. (2019). Learning relationships between text, audio, and video via deep canonical correlation for multimodal language analysis. In arXiv [cs.LG]. http://arxiv.org/abs/1911.05544.

Li, M., Li, W., Wang, F., Jia, X., & Rui, G. (2021). Applying BERT to analyze investor sentiment in stock market. Neural Computing & Applications, 33(10), 4663–4676. https://doi.org/10.1007/s00521-020-05411-7.

Abdirahman, A. A., Hashi, A. O., Dahir, U. M., Elmi, M. A., & Rodriguez, O. E. R. (2023). Enhancing natural language processing in Somali text classification: A comprehensive framework for stop word removal. International Journal of Engineering Trends and Technology, 71(12), 40–49. https://doi.org/10.14445/22315381/ijett-v71i12p205.

Rabani, S. T., & Maheswaran, K. (2017). Software Cognitive Complexity Metrics for OO Design: A Survey. International Journal of Scientific Research in Science, Engineering and Technology, 3(3), 691–698.

Maheswaran, K., & Aloysius, A. (2018). An Interface based Cognitive Weighted Class Complexity Measure for Object Oriented Design. International Journal of Pure and Applied Mathematics, 118(18), 2771–2778.

Published

24.03.2024

How to Cite

D., J., K., V., E., M., R., D., & V., S. (2024). Systematic Study of NLP Learning Models and Performance Evaluation. International Journal of Intelligent Systems and Applications in Engineering, 12(20s), 773–780. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/5275

Section

Research Article