Reinforcement Machine Learning-based Improved Protocol for Energy Efficiency on Mobile Ad-Hoc Networks
Keywords:
Mobile ad-hoc network; Reinforcement learning; K-means clustering; Machine learning; Clustering; Ad-hoc on-demand distance vector
Abstract
Mobile Ad-Hoc Networks (MANETs) are crucial in environments lacking permanent infrastructure, and energy efficiency is a primary concern because nodes rely on battery-powered devices. This study presents the Reinforcement Machine Learning-enhanced Energy Efficient AODV (Ad-Hoc On-Demand Distance Vector) Protocol (RML-EEAODV), which integrates the adaptive capabilities of reinforcement learning with the AODV routing protocol to build an intelligent, energy-conserving routing mechanism. The core challenge in MANETs is minimizing energy use and operational overhead while ensuring optimal packet delivery. RML-EEAODV addresses this by enhancing the AODV protocol's routing decisions: machine learning enables each node to maintain and consult a dynamic database of state information for the intermediate nodes along candidate routes. This database informs packet-forwarding decisions, so that selected routes satisfy the required Quality of Service (QoS). The RML-EEAODV protocol significantly improves energy efficiency and reduces network overhead while maintaining a satisfactory packet delivery ratio.
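The abstract does not specify the learning rule, but the idea of nodes maintaining per-neighbor state and learning energy-aware forwarding decisions can be sketched with standard tabular Q-learning over candidate next hops. The `QRouting` class, its reward shaping, and the node names below are illustrative assumptions, not the authors' implementation:

```python
import random

class QRouting:
    """Minimal sketch of Q-learning for next-hop selection in a MANET.

    State  = current node id; action = candidate next hop.
    The reward is assumed to reflect the residual energy of the chosen
    hop (higher reward for energy-rich neighbors) -- a hypothetical
    shaping, since the paper does not publish its reward function.
    """

    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = {}  # (node, next_hop) -> learned value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, node, neighbors):
        # Epsilon-greedy exploration over the candidate next hops.
        if random.random() < self.epsilon:
            return random.choice(neighbors)
        return max(neighbors, key=lambda n: self.q.get((node, n), 0.0))

    def update(self, node, hop, reward, next_neighbors):
        # One-step Q-learning update toward reward + discounted best
        # value available from the chosen hop's own neighborhood.
        best_next = max((self.q.get((hop, n), 0.0) for n in next_neighbors),
                        default=0.0)
        old = self.q.get((node, hop), 0.0)
        self.q[(node, hop)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

# Toy usage: node "A" learns that forwarding via "B" (high residual
# energy, reward 1.0) beats "C" (nearly depleted, reward 0.1).
random.seed(0)
agent = QRouting(epsilon=0.0)  # greedy, for a deterministic demo
for _ in range(50):
    agent.update("A", "B", 1.0, ["C"])
    agent.update("A", "C", 0.1, ["C"])
```

In a full protocol the table entries would be refreshed from the state database carried in AODV control messages, so stale energy readings decay rather than pin the route to a dying node.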
License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.