Self-Healing Test Automation Framework Using Autonomous ML Agents for Real-Time Test Maintenance and Failure Recovery
Keywords:
Self-healing automation, autonomous agents, machine learning, test maintenance, failure recovery, dynamic test execution, intelligent test automation, real-time software testing, reinforcement learning.

Abstract
As software systems evolve rapidly, maintaining automated test suites has become increasingly difficult. Frequent code changes often break test scripts, leading to unreliable test results and rising maintenance costs. This paper introduces a self-healing test automation framework that uses machine learning (ML) agents to adapt to changes in real time. By combining anomaly detection, flexible locator strategies, and reinforcement learning, the system automatically identifies and repairs broken tests without manual intervention. The framework is designed to reduce flaky tests, speed up recovery from failures, and keep test coverage stable even in fast-changing environments. Our experiments on industry-standard applications show a 38% reduction in manual test maintenance and a 45% improvement in test execution stability. The framework moves us closer to truly autonomous testing systems that can learn, adapt, and grow with the software they test.
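To illustrate the flexible-locator idea mentioned above, here is a minimal, hypothetical sketch of a self-healing element lookup: when the primary locator fails, the framework falls back to alternatives and promotes whichever one succeeds, so future runs try it first. All names (`SelfHealingLocator`, `find`, the toy dict-based "page") are illustrative assumptions, not the paper's actual implementation.

```python
class SelfHealingLocator:
    """Tries an ordered list of locator strategies; promotes the one that works."""

    def __init__(self, strategies):
        # strategies: list of callables taking a page and returning
        # an element, or None if the strategy cannot find one.
        self.strategies = list(strategies)

    def find(self, page):
        for i, strategy in enumerate(self.strategies):
            element = strategy(page)
            if element is not None:
                if i > 0:
                    # "Heal": move the working strategy to the front so
                    # subsequent lookups try it first.
                    self.strategies.insert(0, self.strategies.pop(i))
                return element
        raise LookupError("all locator strategies failed")


# Toy "DOM" stand-in: a dict keyed by locator type and value.
page_v1 = {"id:login-btn": "<button#login>", "text:Log in": "<button#login>"}
page_v2 = {"text:Log in": "<button#login>"}  # the id changed after a UI update

locator = SelfHealingLocator([
    lambda p: p.get("id:login-btn"),  # primary strategy: element id
    lambda p: p.get("text:Log in"),   # fallback strategy: visible text
])

print(locator.find(page_v1))  # found via the primary id strategy
print(locator.find(page_v2))  # id is gone; falls back to text and self-heals
```

In a real framework the strategies would wrap tool-specific lookups (e.g. CSS selectors, XPath, accessibility attributes), and an ML component would rank the fallbacks by predicted reliability rather than fixed order.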
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.