Pricing Model-Aware Task Scheduling in Cloud Paradigm Enhanced with DRL on MAPE Framework

Authors

  • Shruthi P. S., D. R. Umesh

Keywords:

Reinforcement learning, Deep reinforcement learning, MAPE architecture, VM pricing model, Q-learning.

Abstract

Cloud computing is a multifaceted technology that offers a wide range of services to end users, yet several challenges remain open despite the extensive research conducted in this field. Among major issues such as security, sustainability, and availability, efficient resource allocation according to demand is particularly important. Resource allocation has become a critical problem in cloud computing because over-provisioning increases financial risk for both the provider and the end user, while under-provisioning increases service latency, may violate service level agreements, and eventually causes providers to lose customers. Hence, many research efforts pursue optimal resource allocation in the cloud paradigm in different ways, such as load balancing, workflow scheduling, container placement, and QoS-parameter-based scheduling. Our work addresses task scheduling using the Deep Reinforcement Learning technique in conjunction with the MAPE architecture, where the scheduling process evolves over the stages of the MAPE loop while incorporating the VM pricing models (reserved, on-demand, and spot). We implemented our technique on the BitBrains dataset, which consists of traces of 1,750 VMs. The results are reported for several variants of reinforcement learning, and we conclude that reinforcement learning combined with a neural network, i.e., the Deep Reinforcement Learning technique with the VM pricing model (DRL with PM), performs better than the other techniques. Our work's throughput is also compared with that of other approaches reporting promising results, validating that our approach to task scheduling yields superior outcomes.
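
As a rough illustration only (not the authors' implementation), the sketch below shows how a pricing-model-aware scheduling decision could sit inside a MAPE-style loop, using tabular Q-learning as a simplified stand-in for the deep RL agent described above. The per-hour prices, task-size classes, spot-revocation probability, and reward shaping are all assumed values for demonstration.

```python
import random
from collections import defaultdict

# Hypothetical per-hour prices for the three VM pricing models the paper combines
# (reserved, on-demand, spot); real values would come from the provider's price list.
PRICES = {"reserved": 0.05, "on_demand": 0.10, "spot": 0.03}
TASK_CLASSES = ["small", "medium", "large"]      # simplified task-size states
ACTIONS = list(PRICES)                           # action = pricing model chosen for the task

alpha, gamma, epsilon = 0.1, 0.9, 0.2            # Q-learning hyperparameters (assumed)
Q = defaultdict(float)                           # Q[(state, action)] -> estimated value


def monitor():
    """MAPE 'Monitor': observe the next incoming task (simulated here)."""
    return random.choice(TASK_CLASSES)


def analyze_plan(state):
    """MAPE 'Analyze/Plan': epsilon-greedy choice of a VM pricing model."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


def execute(state, action):
    """MAPE 'Execute': run the task and return a price-aware reward.

    Reward trades cost against a simulated latency penalty: spot instances are
    cheap but may be revoked; on-demand is reliable but more expensive.
    """
    hours = {"small": 1, "medium": 2, "large": 4}[state]
    cost = PRICES[action] * hours
    interrupted = action == "spot" and random.random() < 0.15   # assumed revocation rate
    latency_penalty = 0.5 if interrupted else 0.0
    return -(cost + latency_penalty)


def knowledge_update(state, action, reward, next_state):
    """Shared 'Knowledge': standard Q-learning update rule."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])


state = monitor()
for _ in range(10_000):                          # training episodes
    action = analyze_plan(state)
    reward = execute(state, action)
    next_state = monitor()
    knowledge_update(state, action, reward, next_state)
    state = next_state

for s in TASK_CLASSES:
    print(f"{s:>6} task -> {max(ACTIONS, key=lambda a: Q[(s, a)])}")
```

In the full approach described above, the tabular Q-function would be replaced by a neural-network approximator (DRL) and the state would be derived from the BitBrains workload traces rather than simulated.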

Published

26.03.2024

How to Cite

Umesh, D. R., & Shruthi, P. S. (2024). Pricing Model-Aware Task Scheduling in Cloud Paradigm Enhanced with DRL on MAPE Framework. International Journal of Intelligent Systems and Applications in Engineering, 12(3), 1612–1619. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/5560

Issue

Section

Research Article