Energy-Efficient Routing Protocols in Mobile Ad-hoc Networks (MANETs) Using Machine Learning Optimization
Keywords:
MANET, Energy Efficiency, Reinforcement Learning, Routing Protocols, Optimization, Disaster Response

Abstract
Mobile Ad-hoc Networks (MANETs) play a crucial role in scenarios where fixed infrastructure is unavailable, such as disaster response and military operations. However, their resource-constrained nature makes energy efficiency a critical concern. Traditional routing protocols, while effective in stable networks, often fail to adapt dynamically to changing topology and traffic conditions. This research proposes the integration of reinforcement learning (RL) algorithms into MANET routing protocols to optimize energy consumption while maintaining network performance. By dynamically adjusting routing paths based on feedback from network states, RL-based approaches enhance both packet delivery ratio and latency performance. Simulation experiments demonstrate that the proposed RL-optimized protocol reduces energy consumption by 18–25% compared to traditional AODV and DSR protocols, with improved resilience in highly mobile and resource-constrained environments. The findings highlight the potential of AI-driven routing to support energy-efficient and reliable MANETs for mission-critical applications.
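To make the idea concrete, the sketch below shows one common way reinforcement learning is applied to next-hop selection in a MANET: a tabular Q-learning agent at each node that trades off a neighbor's residual energy against link cost. This is a minimal, illustrative formulation only; the class name, reward weights, and update rule are assumptions for exposition and do not reproduce the protocol evaluated in the paper.

```python
# Illustrative sketch: tabular Q-learning for energy-aware next-hop selection.
# State = current node, action = candidate next hop. The reward favors neighbors
# with higher residual energy and cheaper links. All names and weights here are
# assumptions, not the authors' implementation.
import random
from collections import defaultdict

class EnergyAwareQRouter:
    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(node, next_hop)] -> estimated value
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor
        self.epsilon = epsilon        # exploration probability

    def select_next_hop(self, node, neighbors):
        """Epsilon-greedy choice among the node's current neighbors."""
        if not neighbors:
            return None
        if random.random() < self.epsilon:
            return random.choice(neighbors)
        return max(neighbors, key=lambda n: self.q[(node, n)])

    def reward(self, residual_energy, link_cost, delivered):
        # Favor energy-rich neighbors and cheap links; a delivery bonus
        # rewards routes that actually reach the destination.
        return 1.0 * residual_energy - 0.5 * link_cost + (2.0 if delivered else 0.0)

    def update(self, node, next_hop, r, next_neighbors):
        """Standard Q-learning update applied after forwarding via next_hop."""
        best_next = max((self.q[(next_hop, n)] for n in next_neighbors), default=0.0)
        key = (node, next_hop)
        self.q[key] += self.alpha * (r + self.gamma * best_next - self.q[key])
```

In this kind of formulation, the feedback loop the abstract describes corresponds to calling `update` after each forwarding decision, so that energy-depleted or unreliable neighbors gradually receive lower Q-values and are avoided on subsequent routes.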
References
C. Perkins, E. Belding-Royer, and S. Das, “Ad hoc On-Demand Distance Vector (AODV) Routing,” IETF RFC 3561, 2003.
D. B. Johnson, D. A. Maltz, and J. Broch, “DSR: The dynamic source routing protocol for multi-hop wireless ad hoc networks,” Ad Hoc Networking, vol. 5, pp. 139–172, 2001.
T. Clausen and P. Jacquet, “Optimized Link State Routing Protocol (OLSR),” IETF RFC 3626, 2003.
S. Singh, M. Woo, and C. S. Raghavendra, “Power-aware routing in mobile ad hoc networks,” Proceedings of the 4th Annual ACM/IEEE International Conference on Mobile Computing and Networking, pp. 181–190, 1998.
J. Chang and L. Tassiulas, “Energy conserving routing in wireless ad hoc networks,” Proceedings of IEEE INFOCOM 2000, pp. 22–31, 2000.
L. Fu, S. Zhong, and X. Cheng, “Self-learning routing in mobile ad hoc networks,” IEEE Transactions on Mobile Computing, vol. 7, no. 9, pp. 1137–1149, 2008.
H. Ye, L. Liang, and G. Y. Li, “Deep reinforcement learning based resource allocation for V2V communications,” IEEE Transactions on Vehicular Technology, vol. 68, no. 4, pp. 3163–3173, 2019.



