Self-Organizing Maps-based Graph Convolutional Summarizer: Multi-Model Approaches for Document Summarization
Keywords:
Deep learning, Document summarization, Automatic document summarization, Self-Organizing Maps-based Graph Convolutional Summarizer (SOM-GCS), Multi-document summarization

Abstract
A summary condenses a lengthy text or paper into a brief overview of the same subject. Most of the paper's crucial material must be retained while superfluous verbosity is eliminated. To produce a succinct summary, a document summarization system collects keywords from one or more documents; the basic idea is to reduce a text to its most important information. Such a system, given a collection of documents, extracts the essential information from the source while keeping the user or task in mind, then presents the summary as well-formed, concise prose. Summarizing numerous documents rather than only one is called multi-document summarization, and it comes in two main kinds: extractive and abstractive. Extractive summaries are constructed from the most important and notable phrases and sentences of the original text, whereas abstractive summaries may contain terms and sentences not found in the original. This article focuses on recently introduced automatic text summarization (ATS) methods. Deep learning-based models have recently been applied to multi-document summarization, encouraging the growth of text summarization and enhancing model performance. We propose the Self-Organizing Maps-based Graph Convolutional Summarizer (SOM-GCS), an extractive multi-document summarization method that uses SOM to guarantee a minimum performance bound as an alternative to standard approaches. It addresses the flaws of a plain graph convolutional summarizer and adds improvements that yield a summarizer with sentence embedding and feature learning that are aware of the graph structure. A rigorous methodology is needed to demonstrate how these improvements are possible while still guaranteeing a minimal performance bound. The effectiveness of the proposed summarization approach is assessed on the DUC 2004 and Daily Mail/CNN datasets.
The experimental findings show that SOM-GCS performs comparably to state-of-the-art summarization methods in terms of ROUGE scores.
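The abstract does not spell out implementation details, so the following toy sketch is offered only to illustrate the general idea it describes: clustering sentence vectors with a small self-organizing map, then using graph-style pairwise similarity to pick one representative sentence per cluster for an extractive summary. Every function name, parameter, and simplification here is an assumption for illustration; the paper's actual SOM-GCS uses graph convolutions and learned embeddings, not this bag-of-words toy.

```python
# Illustrative sketch only -- NOT the paper's SOM-GCS architecture.
# Sentences become bag-of-words vectors, a tiny 1-D SOM clusters them,
# and a degree-like similarity-graph score selects one sentence per cluster.
import math
import random

def sentence_vector(sentence, vocab):
    """Normalized bag-of-words vector over a fixed vocabulary."""
    words = sentence.lower().split()
    return [words.count(w) / max(len(words), 1) for w in vocab]

def cosine(a, b):
    """Cosine similarity; 0.0 when either vector is all zeros."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def train_som(vectors, n_units=2, epochs=50, lr=0.5, seed=0):
    """Minimal SOM: each unit's weights are nudged toward winning inputs,
    with the learning rate decaying over epochs."""
    rng = random.Random(seed)
    units = [list(rng.choice(vectors)) for _ in range(n_units)]
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)
        for v in vectors:
            # best-matching unit (BMU) by cosine similarity
            bmu = max(range(n_units), key=lambda i: cosine(units[i], v))
            for j in range(len(v)):  # move the winner toward the input
                units[bmu][j] += rate * (v[j] - units[bmu][j])
    return units

def summarize(sentences, n_units=2):
    """Extract up to n_units sentences: one top-scoring sentence per SOM cluster."""
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    vecs = [sentence_vector(s, vocab) for s in sentences]
    units = train_som(vecs, n_units)
    cluster = [max(range(n_units), key=lambda i: cosine(units[i], v)) for v in vecs]
    # graph score: total similarity of a sentence to all sentences (degree-like)
    score = [sum(cosine(v, u) for u in vecs) for v in vecs]
    summary = []
    for c in range(n_units):
        members = [i for i, cl in enumerate(cluster) if cl == c]
        if members:
            summary.append(sentences[max(members, key=lambda i: score[i])])
    return summary
```

The fixed random seed keeps the sketch deterministic; a real system would replace the bag-of-words vectors with learned sentence embeddings and the degree score with graph convolutional feature learning.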
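ROUGE, the metric referenced above, measures n-gram overlap between a candidate summary and a human reference. A minimal ROUGE-N sketch follows for readers unfamiliar with the metric; it is illustrative only, and published scores use the standard ROUGE toolkit rather than a re-implementation like this.

```python
# Minimal ROUGE-N: n-gram overlap between candidate and reference summaries.
from collections import Counter

def rouge_n(candidate, reference, n=1):
    """Return ROUGE-N precision, recall, and F1 for whitespace-tokenized text."""
    def ngrams(text, n):
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((cand & ref).values())          # clipped n-gram matches
    recall = overlap / max(sum(ref.values()), 1)  # fraction of reference covered
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}
```

For example, `rouge_n("the cat sat", "the cat sat on the mat")` gives a ROUGE-1 recall of 0.5, since 3 of the reference's 6 unigram tokens are covered.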
License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
All papers should be submitted electronically. All submitted manuscripts must be original work that is not under submission at another journal or under consideration for publication in another form, such as a monograph or chapter of a book. Authors of submitted papers are obligated not to submit their paper for publication elsewhere until an editorial decision is rendered on their submission. Further, authors of accepted papers are prohibited from publishing the results in other publications that appear before the paper is published in the Journal unless they receive approval for doing so from the Editor-In-Chief.
IJISAE open access articles are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. This license lets the audience share and adapt the material provided they give appropriate credit, provide a link to the license, and indicate if changes were made; if they remix, transform, or build upon the material, they must distribute their contributions under the same license as the original.