Artificial Intelligence for Accessibility and Performance Auditing: Automated Findings with Human Judgment
Keywords:
Web Accessibility Auditing, WCAG Conformance Evaluation, Automated Detection Coverage, Large Language Model Augmentation, Continuous Auditing Pipeline
Abstract
Automated accessibility and performance auditing tools have become integral to modern web development pipelines, yet systematic evidence shows that treating their outputs as definitive conformance verdicts leads to programs that are overconfident in coverage and underinvest in expert judgment. Deterministic rule engines reliably surface structural defects at scale but remain fundamentally constrained in their ability to evaluate success criteria requiring semantic interpretation, contextual reasoning, or natural language understanding. Established standard frameworks—structured around principles of perceivability, operability, understandability, and robustness—provide the normative foundation against which both automated and human findings must be mapped to remain institutionally credible and legally defensible. Performance auditing presents a structurally parallel set of challenges, where threshold-based metrics require human disambiguation before remediation decisions can be responsibly made. The empirical boundaries of automated detection are quantified through mutation testing and coverage analysis, confirming that no single tool is sufficient and that tools are structurally complementary rather than interchangeable. Artificial intelligence augmentation extends automated coverage into semantically demanding criteria, achieving meaningful detection rates that conventional rule engines cannot approach, while introducing anchoring risks that demand carefully designed human-in-the-loop workflows. A continuous auditing pipeline with graded confidence tiers — separating high-confidence structural findings, medium-confidence semantic assessments, and low-confidence interaction-dependent evaluations — provides the operational architecture necessary to allocate expert attention proportionally, measure program quality over time, and produce findings that are auditable, reproducible, and defensible across tool versions and evaluation cycles.
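The graded confidence tiers described above can be illustrated with a minimal triage sketch. This is a hypothetical illustration, not the published pipeline: the rule IDs, the `Finding` type, and the tier boundaries are assumptions chosen to show how structural, semantic, and interaction-dependent findings might be routed to different levels of expert review.

```python
# Hypothetical sketch of graded confidence-tier triage for audit findings.
# Rule IDs and tier boundaries are illustrative assumptions.
from dataclasses import dataclass

# Rule families that deterministic engines detect reliably
# (structural defects such as missing alt attributes or form labels).
STRUCTURAL_RULES = {"image-alt", "label", "html-has-lang"}
# Rule families requiring semantic judgment (e.g. whether alt text is
# meaningful), where AI-augmented checks yield medium-confidence findings.
SEMANTIC_RULES = {"alt-text-quality", "heading-describes-section"}

@dataclass
class Finding:
    rule_id: str
    needs_interaction: bool = False  # e.g. keyboard-trap or focus-order checks

def tier(finding: Finding) -> str:
    """Assign a finding to a review tier so that expert attention
    scales with uncertainty rather than with raw finding counts."""
    if finding.needs_interaction:
        return "low"       # interaction-dependent: manual evaluation required
    if finding.rule_id in STRUCTURAL_RULES:
        return "high"      # deterministic structural check: spot-check only
    if finding.rule_id in SEMANTIC_RULES:
        return "medium"    # AI-assisted semantic assessment: sampled review
    return "low"           # unknown rule family: default to human judgment

findings = [
    Finding("image-alt"),
    Finding("alt-text-quality"),
    Finding("focus-order", needs_interaction=True),
]
print([tier(f) for f in findings])  # ['high', 'medium', 'low']
```

Routing unknown rule families to the low-confidence tier reflects the abstract's thesis: when automated coverage cannot be established, the default is human judgment rather than silent acceptance.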
DOI: https://doi.org/10.17762/ijisae.v14i1s.8222
License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
IJISAE open access articles are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. This license lets readers share and adapt the material provided they give appropriate credit, link to the license, and indicate if changes were made; if they remix, transform, or build upon the material, they must distribute their contributions under the same license as the original.