Visual Storytelling: Text-to-Video Conversion from Bibliometric Perspectives
Keywords:
Text2Video, Education, Research, Bibliometric, Precision, Adaptive Storytelling
Abstract
Building on previous research on Text2Image, Text2Presentation, and Text2Video, this study critically evaluates a wide range of current methodologies, including different generative models, highlights their shortcomings, and suggests directions for future research. Text-to-video generation is a rapidly developing field of artificial intelligence that combines computer vision and natural language processing to generate video material from textual descriptions. Bibliometric analysis is a popular and rigorous method for searching and assessing large volumes of scientific literature. We therefore provide a comprehensive overview of the bibliometric approach, emphasizing its various methodologies, together with detailed guidelines for conducting bibliometric analysis rigorously and confidently. To this end, we clarify when and where bibliometric analysis should be applied relative to related techniques such as systematic literature reviews and meta-analyses. This work presents the first bibliometric analysis of text-to-video generation, filling a significant gap in the literature by mapping the major themes, prominent researchers, and emerging areas of this rapidly evolving field. Taken as a whole, the study serves as a useful guide to the methods and approaches available for conducting bibliometric research.
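To illustrate the kind of analysis the bibliometric approach involves, the following is a minimal sketch of a keyword co-occurrence count, one of the core building blocks behind tools such as VOSviewer and biblioshiny. It assumes a hypothetical Scopus or Web of Science CSV export named "export.csv" with an "Author Keywords" column in which keywords are separated by semicolons; the file name and column name are illustrative assumptions, not part of the original study.

```python
# Minimal sketch: keyword frequency and co-occurrence counts for a bibliometric study.
# Assumes a hypothetical export file "export.csv" with an "Author Keywords" column
# (semicolon-separated keywords), as produced by common bibliographic databases.
from collections import Counter
from itertools import combinations

import pandas as pd

df = pd.read_csv("export.csv")

keyword_counts = Counter()
cooccurrence_counts = Counter()

for cell in df["Author Keywords"].dropna():
    # Normalize keywords: lowercase, strip whitespace, and deduplicate per document.
    keywords = sorted({kw.strip().lower() for kw in cell.split(";") if kw.strip()})
    keyword_counts.update(keywords)
    # Count each unordered keyword pair once per document.
    cooccurrence_counts.update(combinations(keywords, 2))

print("Top keywords:", keyword_counts.most_common(10))
print("Top co-occurring pairs:", cooccurrence_counts.most_common(10))
```

The resulting co-occurrence counts can be exported as an edge list and visualized as a keyword co-occurrence network, which is the basis of the thematic maps commonly reported in bibliometric studies.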