Performance Evaluation of Proposed Deep Ensemble Method Algorithms in Distributed and Traditional Computing Environments for Structured Data Analysis

Authors

  • M. Bhargavi Krishna, S. Jyothi, P. Bhargavi

Keywords:

Distributed Computing, Big Data, MapReduce, Deep Ensemble Algorithm, Traditional Computing.

Abstract

Storing big data is complex because of the large volume, variety, and velocity at which the data is generated. Such data may be imbalanced and distributed, since it contains both structured and unstructured data. Classifying it requires multiple technologies and methodologies to ensure efficient processing and retrieval, such as the MapReduce model introduced for big data analysis. MapReduce is a parallel processing technique used to process data in a distributed manner: it simplifies concurrent processing by partitioning large amounts of data into smaller segments and executing them simultaneously on a big data platform. However, it is less efficient than other forms of parallel processing and can be slow for certain operations; it is also not optimised for real-time processing and is therefore unsuitable for applications that demand minimal latency. In the present era, distributed computing has emerged as a viable and increasingly popular choice for numerous complex-data applications, largely owing to technological advances in computers, networks, mobile devices, and wireless communication technologies, which are now widely used in daily life. In this paper, the proposed Deep Ensemble methods, namely Deep Learning without Tuning and Deep Ensemble with Boosting and Performance Tuning, are applied to classify structured data in both distributed computing and traditional computing environments.
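The partition-then-combine pattern that MapReduce applies can be sketched in plain Python. This is an illustrative sketch only, not the paper's implementation: the tuple record format, the class-count task, and the two-way split are all assumptions chosen to keep the example small, with `multiprocessing.Pool` standing in for a distributed execution engine.

```python
from multiprocessing import Pool

def map_partition(records):
    """Map step: count records per class label within one data segment."""
    counts = {}
    for rec in records:
        label = rec[0]  # assumed record layout: (label, value)
        counts[label] = counts.get(label, 0) + 1
    return counts

def reduce_counts(partials):
    """Reduce step: merge the per-segment counts into global totals."""
    total = {}
    for part in partials:
        for label, n in part.items():
            total[label] = total.get(label, 0) + n
    return total

if __name__ == "__main__":
    data = [("A", 1), ("B", 2), ("A", 3), ("C", 4), ("B", 5), ("A", 6)]
    # Partition the dataset into smaller segments processed concurrently.
    chunks = [data[i::2] for i in range(2)]
    with Pool(2) as pool:
        partials = pool.map(map_partition, chunks)
    print(reduce_counts(partials))  # {'A': 3, 'B': 2, 'C': 1}
```

On a real big data platform the map tasks would run on separate cluster nodes and the shuffle/reduce phase would be handled by the framework; the decomposition into a pure map function and an associative reduce function is the part that carries over.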


References

Aye, K. N. (2013), “A platform for big data analytics on distributed scale-out storage system”.

Zhao, C., Peng, R., and Wu, D. (2023), “Bagging and Boosting Fine-Tuning for Ensemble Learning”, IEEE Transactions on Artificial Intelligence, pp. 1–15, doi: 10.1109/TAI.2023.3296685.

Drucker, H., Cortes, C., Jackel, L., LeCun, Y., and Vapnik, V. (1994), “Boosting and Other Ensemble Methods”, Neural Computation, 6, pp. 1289–1301.

Bauer, E., and Kohavi, R. (1999), “An empirical comparison of voting classification algorithms: Bagging, boosting, and variants”, Machine Learning, https://link.springer.com/article/10.1023/A:1007515423169.

Breiman, L. (1996), “Bagging predictors”, Machine Learning, https://link.springer.com/article/10.1007/BF00058655.

Odegua, R. (2023), “An Empirical Study of Ensemble Techniques (Bagging, Boosting and Stacking)”, Int J Environ Res Public Health, 20(6): 4977, doi: 10.3390/ijerph20064977.

Zhang, W., Jiang, J., Shao, Y., Cui, B., (2020), “Snapshot boosting: a fast ensemble framework for deep neural networks”, Science China Informat. Sci. 63 (1), 1–12.

Sagi, O., Rokach, L., (2018), “Ensemble learning: A survey”, Wiley Interdiscip. Rev.: Data Min. Knowledge Discov. 8 (4), e1249.

Vilalta, R., and Drissi, Y. (2002), “A Perspective View and Survey of Meta-Learning”, Journal of Artificial Intelligence Review, 18(2), pp. 77–95.

Dzeroski, S., and Zenko, B. (2004), “Is Combining Classifiers with Stacking Better than Selecting the Best One?”, Machine Learning, 54, Kluwer Academic Publishers, Netherlands, pp. 255–273.

Domingos Pedro (1998), “Knowledge Discovery via Multiple Models”, Intelligent Data Analysis, 2, pp.187-202.

Ting, K. M., and Witten, I. H. (1999), “Issues in stacked generalization”, Journal of Artificial Intelligence Research, 10, pp.271–289.

Breiman L. (1996), “Bagging predictors”, Machine Learning, vol. 24, pp.123–140.

Oza, N. C., and Tumer, K. (2008), “Classifier ensembles: Select real-world applications”, Information Fusion, vol. 9, no. 1, pp. 4–20.

Polikar R. (2006), “Ensemble-based systems in decision making,” IEEE Circuits System Mag., vol. 6, no. 3, pp. 21–45.

Rokach L. (2010), “Ensemble-based classifiers,” Artificial Intelligence Review, vol.33, pp.1-39.

Islam R. and Abawajy J. (2013), “A multi-tier phishing detection and filtering approach”, Journal of Network and Computer Applications, vol. 36, pp.324–335.

Kelarev A.V., Stranieri A., Yearwood J.L., Abawajy J., Jelinek H.F. (2012), “Improving Classifications for Cardiac Autonomic Neuropathy Using Multi-level Ensemble Classifiers and Feature Selection Based on Random Forest”, In Proceedings of the Tenth Australasian Data Mining Conference (AusDM 2012), Sydney, Australia, pp.93-101.

Melville P. and Mooney R. J. (2005), “Creating diversity in ensembles using artificial data”, Information Fusion, vol.6, pp.99-111.

Freund, Y., and Schapire, R. (1996), “Experiments with a new boosting algorithm”, Proceedings of the 13th International Conference on Machine Learning, pp. 148–156.

Mustafa, D. (2022), “A Survey of Performance Tuning Techniques and Tools for Parallel Applications”, IEEE Access, vol. 10, pp. 15036–15055, doi: 10.1109/ACCESS.2022.3147846.

Published

26.03.2024

How to Cite

M. Bhargavi Krishna. (2024). Performance Evaluation of Proposed Deep Ensemble Method Algorithms in Distributed and Traditional Computing Environments for Structured Data Analysis. International Journal of Intelligent Systems and Applications in Engineering, 12(21s), 4452 –. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/6328

Issue

Section

Research Article