International Journal of Intelligent Systems and Applications in Engineering https://ijisae.org/index.php/IJISAE <div style="border: 3px solid black; padding: 10px; background-color: aliceblue;"> <p style="margin: 5px; font-size: 15px;"><strong style="font-size: 20px;"><u>Update Regarding Articles' Indexing:</u></strong><br />Dear esteemed authors and readers,<br />We are pleased to inform you that the <strong>International Journal of Intelligent Systems and Applications in Engineering (IJISAE)</strong> has successfully passed the re-evaluation process by <strong>Elsevier</strong>. This achievement reflects our commitment to maintaining the highest standards of quality in academic publishing.<br />We are also excited to announce that our pending articles will begin to be indexed in Scopus within 6 weeks. This is a significant milestone for us, and we believe it will enhance the visibility and accessibility of our published research.<br />We would like to express our gratitude to all our authors, reviewers, and readers for their continuous support and contributions towards making IJISAE a leading platform for scholarly research in the field of intelligent systems and applications in engineering.<br />We look forward to continuing to provide a high-quality platform for academic exchange and encourage all interested authors to submit their best work to IJISAE.<br /><br />Best regards,<br />The IJISAE Editorial Team</p> <br /> <p style="margin: 5px; font-size: 15px;"><strong style="font-size: 20px;"><u>Information for Authors:</u></strong><br />We are pleased to inform you that we are now collaborating with <strong>Digital Commons, Elsevier</strong> to give the journal much better visibility. Authors will also be able to view their citations and metrics such as PlumX directly from the journal website. 
<strong>IJISAE</strong> will be transitioning from <strong>OJS</strong> to the <strong>Digital Commons Platform</strong> over the next few months, so if you have any queries or experience any delays, please contact us directly at <em><strong>editor@ijisae.org</strong></em></p> </div> <p><strong><a href="https://ijisae.org/IJISAE">International Journal of Intelligent Systems and Applications in Engineering (IJISAE)</a></strong> is an international and interdisciplinary journal for both invited and contributed peer-reviewed articles on intelligent systems and applications in engineering at all levels. The journal publishes a broad range of papers covering theory and practice in order to facilitate future efforts of individuals and groups involved in the field. <strong>IJISAE</strong>, a peer-reviewed double-blind refereed journal, publishes original papers featuring innovative and practical technologies related to the design and development of intelligent systems in engineering. Its coverage also includes papers on intelligent systems applications in areas such as nanotechnology, renewable energy, medical engineering, aeronautics and astronautics, mechatronics, industrial manufacturing, bioengineering, agriculture, services, intelligence-based automation and appliances, medical robots and robotic rehabilitation, and space exploration.</p> <p>As an Open Access Journal, IJISAE devotes itself to promoting scholarship in intelligent systems and applications in all fields of engineering and to speeding up the publication cycle thereof. Researchers worldwide will have full access to all the articles published online and be able to download them with zero subscription fees. 
Moreover, the influence of your research will rapidly expand once you become an Open Access (OA) author, because an OA article has a greater chance of being used and cited than one held behind the subscription barriers of the traditional publishing model.</p> <p><strong>IJISAE (ISSN: 2147-6799)</strong> is indexed by <a href="https://www.scopus.com/sourceid/21101021990#tabs=0" target="_blank" rel="noopener">SCOPUS</a>, <a href="https://app.trdizin.gov.tr/dergi/TVRBM05UVT0/international-journal-of-intelligent-systems-and-applications-in-engineering" target="_blank" rel="noopener">TR Index</a>, <a href="https://journals.indexcopernicus.com/search/details?jmlId=3705&amp;org=International%20Journal%20of%20Intelligent%20Systems%20and%20Applications%20in%20Engineering,p3705,3.html">IndexCopernicus</a>, <a href="http://globalimpactfactor.com/intelligent-systems-and-applications-in-engineering-ijisae/%20in%20Engineering,p3705,3.html" target="_blank" rel="noopener">Global Impact Factor</a>, <a href="http://cosmosimpactfactor.com/page/journals_details/6400.html" target="_blank" rel="noopener">Cosmos</a>, <a href="https://scholar.google.com.tr/scholar?q=IJISAE&amp;btnG=&amp;hl=tr&amp;as_sdt=0%2C5">Google Scholar</a>, <a href="http://www.journaltocs.ac.uk/index.php?action=search&amp;subAction=hits&amp;journalID=29745" target="_blank" rel="noopener">JournalTocs</a>, <a href="https://www.idealonline.com.tr/IdealOnline/lookAtPublications/journalDetail.xhtml?uId=679" target="_blank" rel="noopener">IdealOnline</a>, <a href="http://oaji.net/journal-detail.html?number=5475" target="_blank" rel="noopener">OAJI</a>, <a href="https://www.researchgate.net/journal/International-Journal-of-Intelligent-Systems-and-Applications-in-Engineering-2147-6799" target="_blank" rel="noopener">ResearchGate</a>, <a href="http://esjindex.org/search.php?id=2455" target="_blank" rel="noopener">ESJI</a>, <a href="https://search.crossref.org/" target="_blank" rel="noopener">Crossref</a>, and <a 
href="https://portal.issn.org/resource/ISSN/2147-6799" target="_blank" rel="noopener">ROAD</a>.</p> <p>Please Contact: <a href="mailto:editor@ijisae.org">editor@ijisae.org</a></p> <p><img style="width: 36px; height: 36px;" src="https://ijisae.org/public/site/images/ilkerozkan/about-the-author-1.jpg" alt="" align="left" /></p> <p><strong>Submit your manuscripts </strong><a style="color: blue;" href="http://manuscriptsubmission.net/ijisae/index.php/submission/about/submissions#authorGuidelines">Detailed information for authors</a></p> <p><strong>Publication Fee:</strong> 600 USD (The APC is calculated based on the number of pages and color figures in the final accepted manuscript. Charges are fixed at 600 USD for the first 10 pages. For manuscripts exceeding 10 pages, there is an additional charge of 95 USD per additional page.)</p> en-US International Journal of Intelligent Systems and Applications in Engineering 2147-6799 <p>All papers should be submitted electronically. All submitted manuscripts must be original work that is not under submission at another journal or under consideration for publication in another form, such as a monograph or chapter of a book. Authors of submitted papers are obligated not to submit their paper for publication elsewhere until an editorial decision is rendered on their submission. 
Further, authors of accepted papers are prohibited from publishing the results in other publications that appear before the paper is published in the Journal unless they receive approval for doing so from the Editor-in-Chief.</p> <p>IJISAE open access articles are licensed under a&nbsp;<a href="http://creativecommons.org/licenses/by-sa/4.0/" target="_blank" rel="noopener">Creative Commons Attribution-ShareAlike 4.0 International License</a>.&nbsp;This license lets users share and adapt the material provided they give appropriate credit, provide a link to the license, and indicate if changes were made; if they remix, transform, or build upon the material, they must distribute their contributions under the same license as the original.</p> A Hybrid Deep Learning Approach for Predicting Patient Health Outcomes in Mobile Healthcare Applications https://ijisae.org/index.php/IJISAE/article/view/8055 <p>Together with mobile healthcare apps, deep learning has transformed health monitoring and prediction. This paper proposes a hybrid deep learning approach for precise patient health outcome prediction in mobile health systems. It exploits Convolutional Neural Networks (CNN) to extract features, followed by Long Short-Term Memory (LSTM) networks to learn sequential patterns, for efficient analysis of patients' vitals, past medical history, and real-time sensor data. An attention mechanism also plays a significant role in highlighting important health parameters, improving the interpretability and explainability of the model and thereby supporting better decisions. We train the hybrid model on heterogeneous healthcare data and evaluate it using accuracy, precision, recall, and F1-score. The experimental results demonstrate significant benefits in predictive consistency and real-time flexibility over traditional deep learning models. 
This framework could transform mobile healthcare applications by enabling early disease detection, personalized treatment recommendations, and timely intervention in the patient journey, facilitating healthier and more effective healthcare.</p> Akhil Tirumalasetty Copyright (c) 2026 Akhil Tirumalasetty http://creativecommons.org/licenses/by-sa/4.0 2026-02-14 2026-02-14 14 1s 01 10 Efficient Large-Scale Data based on Big Data Framework using Critical Influences on Financial Landscape https://ijisae.org/index.php/IJISAE/article/view/8056 <p>Big data is one of the most pressing commercial and technological concerns of the current era. Hundreds of millions of events occur on an ongoing basis. The financial sector is deeply involved in the computation of big data events; hundreds of millions of financial transactions occur in the financial industry each day. Financial practitioners and analysts perceive this as an emerging challenge in the data administration and analytics of a variety of financial products and services. In addition, financial services and products are significantly affected by big data. Determining which financial concerns big data significantly affects, and how, is thus an important research topic. This paper uses these concepts to show the current state of finance and how big data affects financial markets, institutions, internet finance, financial management, internet credit service companies, fraud detection, risk analysis, financial application management, and more. The connection between big data and economic aspects is examined through an exploratory literature review of secondary data sources. 
Because big data in finance is a relatively new concept, further research directions are proposed at the end of this study.</p> Bhanu Prakash Paruchuri Copyright (c) 2026 Bhanu Prakash Paruchuri http://creativecommons.org/licenses/by-sa/4.0 2026-02-14 2026-02-14 14 1s 11 21 A Physics-Informed Neural Network Framework for MHD Casson Ternary and Tetra Hybrid Nanolubricant Flow https://ijisae.org/index.php/IJISAE/article/view/8084 <p>The heat and mass transport properties of Casson hybrid nanofluids flowing across a stretched surface in the presence of thermal radiation, Joule heating, and a magnetic field are examined in this work. We consider two sophisticated nano-lubricant configurations. The first, a ternary hybrid nanofluid, comprises ZnO and SiC nanoparticles suspended in engine oil. Graphene nanoplatelets (GNPs) are added to the ternary mixture to create the second, tetra hybrid nanofluid. The aim is to compare the effects of nanoparticle composition on energy dissipation mechanisms, flow behavior, and thermal conductivity. Joule heating, radiative heat flux, thermo-diffusion, and chemical reaction effects are all included in the mathematical formulation. The governing nonlinear partial differential equations are reduced to a coupled system of ordinary differential equations by means of appropriate similarity transformations. A Physics-Informed Neural Network (PINN) method designed specifically for nanofluid lubrication systems is used to solve these equations. By directly integrating the governing physical laws into the loss function, the proposed PINN architecture enables the simultaneous minimization of boundary-condition errors and equation residuals. Computational efficiency and solution stability are improved by this two-way optimization. 
We also performed a numerical validation of the PINN solver: comparing the tetra hybrid nanofluid to the ternary formulation, numerical results show that the former offers noticeably greater thermal enhancement and lower entropy generation. The remarkable thermal conductivity and large surface area of GNPs are primarily responsible for this performance enhancement. On the other hand, the ternary hybrid nanofluid shows moderate temperature gradients and comparatively constant viscosity behavior. For complicated nonlinear thermal-fluid problems in lubrication applications, the PINN framework overall provides a dependable computational tool with good convergence and prediction accuracy.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8084">https://doi.org/10.17762/ijisae.v14i1s.8084</a></p> Praveen Kumar U M Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-02-14 2026-02-14 14 1s 22 40 Distributed AI Systems: Building Scalable and Safe LLM Orchestration Layers https://ijisae.org/index.php/IJISAE/article/view/8086 <p>Distributed artificial intelligence systems, a new model for integrating large language models with enterprise infrastructure, require orchestration layers to coordinate large models across heterogeneous computing environments. These orchestration frameworks address issues such as retrieving context, controlling execution, managing system state, and ensuring observability, improving the overall effectiveness of the deployment. Retrieval-augmented generation (RAG) is a major pattern by which LLMs complement model output with grounded information to reduce hallucinations; hybrid retrieval architectures combining lexical and dense retrieval enable efficient pinpointing of semantically relevant documents, while multi-agent coordination patterns organise specialised autonomous agents to decompose compositional reasoning problems into subproblems. 
Policy-aware execution mechanisms implement security functionalities, such as authorization gates and context sanitization pipelines, that respect zero-trust principles during inference via mutual authentication and encryption protocols. Fault tolerance mechanisms address probabilistic failures unique to language model inference, including token truncation and semantic coherence degradation. Scalability patterns employ horizontal and vertical strategies to maintain performance under variable workloads while preserving tenant isolation boundaries. This article presents architectural patterns, performance benchmarks, and governance frameworks for production-ready language model systems that meet enterprise goals for reliability, security, and regulatory compliance. This work is informed by production deployment patterns and operational metrics observed in large-scale enterprise language model systems, emphasizing practical applicability over purely theoretical analysis.</p> Sahil Agarwal Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-02-14 2026-02-14 14 1s 41 48 Numerical Solution of the 2D Cauchy–Riemann System Using Classical and Quantum-Inspired Finite Difference and Crank–Nicolson Schemes https://ijisae.org/index.php/IJISAE/article/view/8098 <p>The Cauchy–Riemann (CR) equations form the fundamental condition for analyticity in complex analysis and arise in potential theory, fluid mechanics, and electromagnetic field modeling. In this study, the two-dimensional Cauchy–Riemann system is solved numerically under prescribed Dirichlet boundary conditions using four approaches: (i) Finite Difference (FD), (ii) Quantum-Inspired Finite Difference (QI-FD), (iii) Crank–Nicolson (CN), and (iv) Quantum-Inspired Crank–Nicolson (QI-CN). Full mathematical derivations of discretization schemes are provided. The quantum-inspired schemes introduce amplitude-modulated update operators motivated by quantum probability dynamics. 
Comparative simulations demonstrate convergence behavior, stability properties, and error characteristics. Multiple graphical outputs including surface plots, contour maps, error heatmaps, and convergence curves are presented.</p> Mitat Uysal Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-02-26 2026-02-26 14 1s 49 52 Designing Reliable Event-Driven Enterprise Platforms Using Apache Kafka https://ijisae.org/index.php/IJISAE/article/view/8117 <p>Enterprise platforms in domains such as digital payments, supply chains, and customer engagement increasingly leverage event-driven architectures to achieve real-time data propagation, service decoupling, and horizontal scalability. Apache Kafka has emerged as a foundational element to build high-throughput, fault-tolerant messaging systems that can sustain event streams across distributed architectures. Kafka-based systems require discipline across delivery semantics, partitioning, consumer group coordination, back pressure, and schema evolution. Exactly-once semantics are achieved through idempotent producers and transactional APIs to avoid duplicate processing with throughput that is sufficient for production workloads at enterprise scale. Partition keys that match business rules help keep the order of transactions, while adjusting the number of consumers based on lag and controlling producer access help maintain system stability during different load levels. Schema compatibility enforcement via registry-driven governance keeps producers from accidentally publishing incompatible breaking changes to production topics. 
Together, these architectural and operational principles give Kafka-based platforms the durability, correctness, and resilience required of enterprise-grade event processing in a modern system of record.</p> Chandramouli Holigi Copyright (c) 2026 Chandramouli Holigi http://creativecommons.org/licenses/by-sa/4.0 2026-03-24 2026-03-24 14 1s 53 59 From Sampling to Population Testing: Continuous Audit Analytics for ICFR Effectiveness https://ijisae.org/index.php/IJISAE/article/view/8118 <p>Internal control over financial reporting has historically depended on periodic, sample-based testing methods that create measurable coverage gaps across high-volume transaction populations. The transition to continuous audit analytics represents a fundamental shift in assurance architecture, from discrete, interval-driven sampling to automated, population-level control testing executed in real time. This article examines the structural drawbacks of conventional sampling models, proposes a three-layer continuous audit architecture integrating deterministic testing, anomaly detection, and behavioral analytics, and redefines key controls within the context of algorithmic execution and machine learning-driven fraud detection. An implementation pathway progressing through foundation, build, operate, and optimize phases is presented alongside the operational governance metrics required to sustain continuous ICFR effectiveness. 
The convergence of enterprise resource planning infrastructure, big data analytics, and artificial intelligence has rendered full-population testing operationally deployable, compressing control failure detection timelines and strengthening the reliability of financial reporting assurance in ways that periodic audit cycles are structurally unable to achieve.</p> Karishma Velisetty Copyright (c) 2026 Karishma Velisetty http://creativecommons.org/licenses/by-sa/4.0 2026-03-24 2026-03-24 14 1s 60 67 Designing High-Performance Distributed Systems for In-Memory Secure Data Processing in Cloud Security Analytics https://ijisae.org/index.php/IJISAE/article/view/8122 <p>The surge of cloud-based applications and advanced cyber threats has created a huge demand for high-performance security analytics that can ingest and process enormous amounts of data in real time. Conventional disk-based centralized security analysis systems tend to have high latency, limited scalability, and insufficient protection of sensitive data. To address these issues, this paper introduces the design of a high-performance distributed in-memory secure data processing system for cloud security analytics. The proposed model employs distributed in-memory computing, parallel processing, and secure data management techniques to deliver low-latency threat analysis and real-time analytics. Advanced security features, such as in-memory data encryption, secure access management, and isolation across distributed nodes, are included to maintain the confidentiality and integrity of data during analytics processing. The system is deployable at scale across cloud platforms while also achieving fault tolerance and resource efficiency. Experimental results show large improvements in processing time, response times, and scalability over traditional disk-centric security analytics platforms. 
The results show that in-memory distributed processing can provide a viable platform for next-generation cloud security analytics, leading to faster threat identification, increased operational efficiency, and strengthened data protection in an ever-evolving cloud landscape.</p> Akhil Karrothu Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-02-25 2026-02-25 14 1s 68 76 AI-Based Predictive Maintenance for General Aviation Aircraft https://ijisae.org/index.php/IJISAE/article/view/8123 <p>Advances in artificial intelligence have brought many new possibilities to predictive maintenance, especially in general aviation. Predictive maintenance, which uses artificial intelligence to predict when machinery will break down, is revolutionizing how maintenance work is done: it allows operators to focus less on repairing things that have already broken and more on keeping aircraft up and running smoothly. This paper analyzes how artificial intelligence is integrated into predictive maintenance systems, with the goal of moving beyond rigid, schedule-based maintenance of current aircraft, and examines methodologies that use data analytics and predictive machine learning to forecast component failures and schedule maintenance accordingly. The article discusses the many benefits AI brings to the aviation industry, such as better safety, lower costs, and smoother operations. It also covers the challenges companies face in getting these systems to work. Finally, the AI-advanced PdM system introduces future possibilities for technological advancement in PdM, including, but not limited to, edge computing, real-time data prediction, and autonomous maintenance. 
This paper examines what the future holds for AI-driven maintenance in general aviation.</p> Sam Suseelan Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-02-28 2026-02-28 14 1s 77 88 Leveraging AI for Predictive Technical Debt Management in SAP Development Ecosystems: Case Studies and Future Prospects https://ijisae.org/index.php/IJISAE/article/view/8124 <p>Technical debt (TD) acts as a silent killer in massive, integrated SAP ecosystems and is often the main reason projects crash and burn. We simply can't afford to be reactive anymore; we need to get ahead of the problem with Predictive Technical Debt Management (PTDM). This paper proposes a PTDM framework that uses Artificial Intelligence (AI) to handle three critical jobs: predicting what will break, prioritizing what to fix, and keeping the deployment line moving. We use a binary classification model (Algorithm 1) to estimate the odds of an ABAP object failing, and we apply Natural Language Processing (NLP) to support tickets to figure out which bugs are actually hurting the business (Algorithm 2). By wrapping this in a Continuous PTDM Loop (Algorithm 3), we automate the creation of remediation tasks. Our operational case studies, such as an S/4HANA migration triage and continuous performance forecasting (Algorithm 4), show that this AI-driven approach speeds up custom code cleanup and stabilizes the system by calculating the "interest rate" of debt before it becomes too expensive to pay off. 
We wrap up by discussing future research into Deep Learning for semantic debt detection and managing debt in cloud-native SAP landscapes.</p> Vamsi Krishna Talasila Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-02-28 2026-02-28 14 1s 89 95 Adaptive AI Governance in Regulated Enterprise Data Platforms: A Trust-Calibrated Automation Framework https://ijisae.org/index.php/IJISAE/article/view/8126 <p>Artificial intelligence (AI) has become foundational to enterprise data platforms in regulated industries, including financial services, healthcare, and compliance-sensitive digital ecosystems. While AI automation improves anomaly detection, prediction, and operational scaling, delegating more decision-making power to algorithms introduces challenges for governance, regulatory risk, and overall system safety. Traditional governance methods that depend on fixed rules or after-the-fact checks are not enough for environments where AI makes decisions: they fail to account for the dynamic nature of AI systems and the need for real-time oversight and adaptability to changing circumstances, particularly given the complex challenges of algorithmic bias and regulatory compliance in sectors such as healthcare and finance. The Trust-Calibrated Automation (TCA) Framework provides a clear method for governing AI that adjusts how much automation is used to the specific risks, rules, and financial importance of each decision-making situation. 
The framework comprises graduated control levels, a method for assessing overall risk, trust-based prioritization of critical issues, and design elements that address known failures in AI systems, such as the algorithmic bias that led to 50% fewer high-need Black patients being identified than equally sick White patients in healthcare risk prediction.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8126">https://doi.org/10.17762/ijisae.v14i1s.8126</a></p> Suman Reddy Gaddam Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-02-27 2026-02-27 14 1s 96 105 Multi-Version Infrastructure for Privacy-Preserving AI/ML Inference at Scale https://ijisae.org/index.php/IJISAE/article/view/8129 <p>As the number of regulatory regimes, multi-stakeholder data relationships, and compliance requirements grows, privacy becomes an increasingly central architectural concern for large-scale AI/ML inference systems. Inference pipelines that apply a single, globally restrictive data policy to every inference context incur a measurable decrease in model performance. To avoid degrading model performance through globally restrictive policies while also avoiding potential policy violations introduced by dynamically modifying data usage per request, our multi-version architecture explicitly maintains multiple versions of user and participant information at the feature and embedding levels. In conjunction, context-aware version selection mechanisms deterministically map the metadata describing an incoming request to the appropriate data usage policy at runtime. In turn, versioned feature vectors are generated from superset representations of available signals, with the appropriate version selected based on the incoming request context and its corresponding data usage policy. Model-specific embeddings are derived from their privacy-compliant feature vectors to ensure end-to-end compliance. 
Rule-based selection schemes, implemented as abstractions decoupled from inference execution code, allow rapid regulatory adaptation without requiring service redeployment. Continuous monitoring helps validate selection quality and detect performance regressions in production environments. The computational overhead introduced by generating and maintaining multiple feature and embedding versions can be reduced through centralized build-once orchestration, shared feature storage schemas, and hybrid offline–online embedding generation within internet-scale latency budgets. Beyond privacy, this architectural pattern generalizes to fairness-aware inference, multi-tenant data isolation, and auditable policy enforcement, enabling versioned features and embedding representations as a foundational primitive for developing trustworthy, policy-compliant AI/ML systems.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8129">https://doi.org/10.17762/ijisae.v14i1s.8129</a></p> Jay Bankimchandra Desai Copyright (c) 2026 Jay Bankimchandra Desai http://creativecommons.org/licenses/by-sa/4.0 2026-03-26 2026-03-26 14 1s 106 111 AI-Assisted Workflow Orchestration in Regulated Healthcare Contact Centers: Architecture, Governance, and Human-in-the-Loop Design Patterns https://ijisae.org/index.php/IJISAE/article/view/8130 <h1><span style="font-size: 10.0pt; line-height: 115%; font-weight: normal;">Healthcare contact centers managing medication access, prior authorization, and benefit coordination operate under sustained pressure—balancing administrative complexity, regulatory obligation, and the expectation of timely, accurate patient support. Artificial intelligence offers meaningful potential to augment these environments, yet the stakes involved demand architectural discipline that many early deployments have underestimated. 
This article presents a reference architecture and accompanying framework for AI-assisted workflow orchestration in regulated healthcare contact centers that deliberately positions machine learning as an augmentative layer within saga-orchestrated, event-driven architectures rather than as a surrogate for human judgment. Drawing on design patterns from responsible AI, distributed systems architecture, and healthcare interoperability standards, the framework addresses human-in-the-loop orchestration, explainable AI integration, continuous model governance, fairness auditing, and regulatory alignment across FDA, CMS, and emerging international requirements. Operational evidence from specialty pharmacy contact center implementations demonstrates that well-governed AI assistance improves agent decision quality, accelerates therapy access timelines, and supports measurable medication adherence gains in high-risk patient cohorts—without ceding accountability over consequential decisions to autonomous systems. Data governance emerges consistently as the foundational prerequisite determining AI readiness and model performance. Taken together, these architectural patterns, governance mechanisms, and evaluation findings position AI-assisted workflow orchestration in regulated healthcare contact centers as a distinct domain within enterprise healthcare systems architecture, providing a concrete reference model for organizations seeking to modernize contact center platforms and medication access workflows without compromising oversight, equity, or human judgment. 
The framework is positioned explicitly within the domain of enterprise healthcare systems architecture, with a focus on regulated contact center platforms and workflow orchestration, providing a reusable foundation for organizations seeking to operationalize AI responsibly in high-stakes patient access workflows.</span></h1> <p><span style="font-size: 10.0pt; line-height: 115%; font-weight: normal;">DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8130">https://doi.org/10.17762/ijisae.v14i1s.8130</a></span></p> Mohammad Jakeer Mehathar Copyright (c) 2026 Mohammad Jakeer Mehathar http://creativecommons.org/licenses/by-sa/4.0 2026-03-26 2026-03-26 14 1s 112 126 Alexa Smart Home: Pioneering Voice‑Driven Smart Home Integration https://ijisae.org/index.php/IJISAE/article/view/8132 <p>The emergence of voice assistants represents one of the most significant paradigm shifts in human–computer interaction since the graphical user interface. Among these, Amazon Alexa played a foundational role in bringing voice assistants from experimental systems to mass-market consumer adoption. This article presents a scholarly analysis of how voice assistants evolved to market readiness, how Alexa pioneered large-scale smart home integration, and how standardized, cloud-based integration frameworks enabled rapid ecosystem growth. 
It further documents my original technical and organizational contributions as a founding engineering manager in the Alexa Smart Home organization, focusing on the design of the Smart Home Skill API, capability interface taxonomy, and lifecycle architecture that became the industry's dominant model for voice-controlled Internet of Things (IoT) systems.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8132">https://doi.org/10.17762/ijisae.v14i1s.8132</a></p> Anil Mankali Masakal Copyright (c) 2026 Anil Mankali Masakal http://creativecommons.org/licenses/by-sa/4.0 2026-03-30 2026-03-30 14 1s 127 132 Digital Equity and Embedded AI: Ensuring Accessibility in Smart City Infrastructure https://ijisae.org/index.php/IJISAE/article/view/8133 <p>The growing array of embedded AI systems in urban infrastructure places society at a critical juncture: these systems promise to significantly enhance the quality of life of all citizens while simultaneously risking the deepening of existing inequalities. This detailed review examines how embedded AI deployments in traffic management, community safety, and utility systems can introduce or exacerbate social divisions through algorithmic bias, digital-access barriers, and interfaces not designed for the full range of users. The article provides actionable frameworks that embedded system architects can apply to deliver the benefits of smart cities equitably, based on the experiences of successful implementations in digitally inclusive cities. Among the key findings, strategic platform architecture choices, universal design principles, and community-oriented development processes play a crucial role in creating genuinely smart urban systems that act as bridges to opportunity rather than barriers to participation. 
The article addresses crucial issues such as algorithmic bias in facial recognition and pedestrian detection, and the need for multi-sensory interfaces that accommodate a wide range of abilities. It also argues that digital equity is not an optional add-on to smart city development but a mandatory condition of sustainable urban change, and indicates that inclusive embedded AI platforms can deliver both high technical quality and equitable outcomes.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8133">https://doi.org/10.17762/ijisae.v14i1s.8133</a></p> Ishan Pardesi Copyright (c) 2026 Ishan Pardesi http://creativecommons.org/licenses/by-sa/4.0 2026-03-30 2026-03-30 14 1s 133 140 Best Practices in Process and Digital Transformation: A Cross-Industry Framework for Scalable Impact https://ijisae.org/index.php/IJISAE/article/view/8141 <p>Mounting cross-industry pressure to improve efficiencies at scale has made process and digital transformation a unified strategic imperative. Effective transformation requires reengineering workflows, governance, and data infrastructure, not technology investment alone. When process discipline and digital capability are developed together, organizations shift from reactive, fragmented operations to adaptive, predictive models that deliver sustainable value. Evidence from healthcare, manufacturing, education, and neurology confirms that durable transformation outcomes depend on process discipline, human capability, and purposeful technology deployment. In healthcare, AI-assisted early warning systems integrated with standardized sepsis protocols, including the Hour-1 Bundle, have produced clinically meaningful reductions in sepsis-related mortality. 
In manufacturing, IoT-enabled predictive maintenance, digital traceability systems, and robotic automation have reduced unplanned downtime, improved yield, and strengthened supply chain resilience. In education, adaptive AI platforms and hybrid learning models have improved outcomes for underserved populations by enabling personalized, student-centered learning at scale. In neurology, wearable monitoring and machine learning models are enabling earlier detection of mild cognitive impairment, while integrated care platforms are reducing fragmentation across dementia care providers. Generative AI and digital twins represent the next frontier, with applications across clinical decision support, autonomous production, and knowledge work already demonstrating measurable productivity gains. Transparent governance frameworks will be essential to ensure these advances are deployed responsibly, equitably, and with clear accountability.</p> Swapna Chimanchodkar Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-04-06 2026-04-06 14 1s 141 148 Human-in-the-Loop UI Design: Evaluating Co-Creation with Generative AI Tools https://ijisae.org/index.php/IJISAE/article/view/8142 <p>The use of generative artificial intelligence in user interface design is changing the way people and AI work together, making processes more efficient but also creating some challenges in how to implement it. Human-in-the-Loop UI design is a socio-technical approach that sees AI as a partner rather than a replacement for human knowledge. This approach necessitates careful integration of technological capabilities, human cognitive processes, and organizational constraints. The evaluative framework created includes metrics for semantic fidelity, design system compliance, cognitive load assessment, and trustworthiness that measure both technical performance and how well people work together. 
Implementation challenges include technical issues such as semantic ambiguity and visual inconsistency; concerns about over-reliance on AI and the erosion of human skills; and organizational complications related to governance and accountability. The article shows that successful HITL adoption relies on clear mitigation strategies, such as establishing design rules, creating interpretable AI interfaces, maintaining ongoing human supervision, and providing thorough training. Enterprise-specific factors include the need for accurate data visualization, accessibility compliance, and security assurance. These factors require specialized evaluation methods that combine quantitative metrics with qualitative judgment. The framework highlights the importance of keeping human creativity intact while using AI to improve efficiency by carefully assigning tasks and checking results. Effective collaboration models include AI-suggestive systems where artificial intelligence provides recommendations while humans maintain decision-making authority. Structured template approaches offer another viable model that balances creative exploration with organizational governance requirements. The socio-technical perspective reveals that advanced technology alone cannot guarantee implementation success. Organizations must also address human factors considerations and assess organizational readiness for integrating AI capabilities into existing design workflows.</p> Sonali Priya Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-02-14 2026-02-14 14 1s 149 159 Hybrid Classical–Quantum Optimization of Wireless Routing Using QAOA and Quantum Walks https://ijisae.org/index.php/IJISAE/article/view/8149 <p>Routing in wireless communication networks is shaped by mobility, interference, congestion, and competing service requirements, making route selection a high-dimensional constrained optimization problem rather than a simple shortest path task. 
This paper investigates the use of hybrid classical–quantum methods for wireless routing, focusing on the Quantum Approximate Optimization Algorithm (QAOA) and quantum walks as candidate mechanisms for exploring complex routing spaces. The paper examines how wireless routing can be expressed as a constrained graph optimization problem in which routing objectives, flow constraints, connectivity requirements, and interference effects are mapped into quantum-compatible Hamiltonian representations. It then discusses how these approaches can be integrated into a hybrid architecture in which classical systems perform network monitoring, graph construction, pre-processing, and deployment, while quantum subroutines are used for selected optimization components. The analysis shows that the potential value of quantum routing lies primarily in the treatment of difficult combinatorial subproblems rather than end-to-end replacement of classical routing frameworks. The paper also highlights practical limitations arising from state preparation, constraint encoding, oracle construction, hardware noise, limited qubit resources, and hybrid execution overhead. 
It is argued that any meaningful near-term advantage will depend on careful problem decomposition, compact encoding, and tight classical–quantum integration.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8149">https://doi.org/10.17762/ijisae.v14i1s.8149</a></p> Eric Howard Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-03-28 2026-03-28 14 1s 160 182 Probabilistic Attribution Models for Digital Out-of-Home Advertising: A Design Science Approach to Bridging Physical Exposure and Digital Behavior https://ijisae.org/index.php/IJISAE/article/view/8163 <p>Because no end-user interaction such as a click or impression exists within a public digital Out-of-Home (DOOH) advertising environment, the article presents a probabilistic attribution framework for linking offline advertisement exposures to observable end-user digital behavior through defined geographical regions of exposure. Using Design Science Research (DSR) methodology, we construct and validate a spatial-temporal modeling framework that utilizes geolocation signals, sensor data harvested from subjects' devices, and privacy-aware inference algorithms. Within this framework, a probabilistic viewability fence concept introduces spatial and temporal constraints on inferred exposure while employing quality filters including dwell time, device orientation, and movement patterns. Comparative validation against benchmarks set by location-based and machine learning attribution models shows that multi-dimensional probabilistic exposure inference is applicable and effective. The framework thus turns DOOH into an accountable advertising medium rather than a pure brand-building medium, allowing cross-channel campaign comparison and data-driven action by marketers. This article contributes to probabilistic attribution theory for non-interactive environments and provides a practical architecture connecting sampled offline behavior with online digital behavior.</p> Muthupalaniappan Ramanathan 
Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-03-20 2026-03-20 14 1s 183 195 Retail Data Engineering as a Fraud & Security Control Plane: A Reference Architecture and Design Patterns https://ijisae.org/index.php/IJISAE/article/view/8164 <p>Data engineering is increasingly a frontline security capability in retail and CPG because fraud detection, incident investigation, and compliance reporting depend on trustworthy, timely, and attributable data. This article makes three contributions. First, it defines a domain-specific reference architecture for retail data engineering—ingestion, storage, processing, serving, and governance—explicitly mapping each layer to control objectives such as integrity, auditability, privacy, and resilience. Second, it formalizes five canonical design patterns (loyalty personalization, multi-touch attribution, inventory automation, enterprise financial migration, and executive reporting) and specifies the operational controls needed in each pattern, including data contracts, identity resolution, and tiered latency. Third, it synthesizes empirical evidence from prior literature to show repeatable outcomes while clarifying the trade-offs between latency, cost, interpretability, and audit requirements. The result is a prescriptive, security-aware blueprint that helps practitioners design retail data platforms that are not only scalable but defensible.</p> Vikas Sripathi Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-03-18 2026-03-18 14 1s 196 211 Reliable Multimodal AI for Structured Knowledge Extraction and Study Material Generation in Real Classrooms: A Transparent Scoping Survey, Taxonomy, Benchmarks, and Research Roadmap https://ijisae.org/index.php/IJISAE/article/view/8165 <p>Educational knowledge in real classrooms is distributed across speech, slides, whiteboards, handwritten mathematics, code, and ad hoc diagrams. 
This makes accurate and persistent study support difficult even when recordings are available. Recent multimodal models and large language model (LLM) systems can summarize lectures and generate notes, but real deployment remains limited by alignment drift, OCR and ASR noise, incomplete extraction of formal STEM content, and hallucinations that can silently corrupt study artifacts. This paper presents a transparent scoping survey of a balanced 100-paper corpus organized into five clusters: multimodal lecture understanding, educational artifact generation, structured knowledge extraction, reliability and hallucination control, and benchmarks and evaluation. We explicitly treat the last two clusters as a transfer toolkit layer for classroom AI rather than as classroom-native systems. Beyond synthesis, the paper contributes: (1) a review protocol with an explicit audit trail and descriptive-count caveats; (2) a reliability-first classroom pipeline in which alignment is the operational core; (3) an operational intermediate representation (IR) with typed fields, evidence granularity, verification records, and abstention behavior; (4) a worked micro-example that carries a 30-second lecture snippet into evidence-linked flashcards; (5) a lecture-grounded versus resource-grounded verification matrix; and (6) a reviewer-ready multimodal faithfulness protocol for mixed evidence such as noisy board crops, OCR, and ASR. The result is a sharper, more operational roadmap for trustworthy classroom AI.</p> Soma Kiran Kumar Nellipudi Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-03-18 2026-03-18 14 1s 212 251 Five Critical Mistakes Organizations Make When Implementing Data Mesh https://ijisae.org/index.php/IJISAE/article/view/8166 <p>The new architectural model of data mesh is poorly understood and implemented in many organizations. The article describes five primary pitfalls of a data mesh transformation. 
The pitfalls are rooted in (1) mistaking data mesh for a technology migration rather than an organizational change model, (2) retaining centralized ownership structures while nominally supporting domain ownership, (3) insufficient platform enablement for data products and self-service, (4) absence of data product contracts and interoperability agreements, and (5) weak federated governance and accountability models. These pitfalls share a common root: they overlook that data mesh is a socio-technical change requiring systemic shifts in organizational design, decision rights, culture, and governance. The article then shows how misalignment between technical adoption and organizational design substantially reduces return on investment and leaves domains without genuine autonomy. Inadequate self-service platforms that impose high cognitive and technical overhead on domain teams, together with a lack of interoperability standards, cause integration costs to grow exponentially as the number of domains increases. The article describes a target design comprising aligned organization, true decentralization, effective self-service platforms, federated contracts, and governance that balances independence with accountability. It makes the case that, by adopting this approach and avoiding the main pitfalls, businesses can transform data from a centralized asset into a product capability that is effectively distributed throughout the organization and used far more strategically.</p> Naveena Kumari Nandale Vadlamudi Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-03-18 2026-03-18 14 1s 252 260 Lineage, Traceability, and Reproducibility as Reliability Requirements in Enterprise AI Systems https://ijisae.org/index.php/IJISAE/article/view/8170 <p>Enterprise systems are increasingly applying artificial intelligence to key business and compliance decisions. 
Most such systems focus on model accuracy and overlook reliability dimensions such as lineage, traceability, and reproducibility. This paper treats these three dimensions as fundamental reliability requirements for enterprise AI. The study examined a production enterprise AI platform over 12 months using a before-and-after quantitative design. After structured lineage, version control, and reproducibility controls were introduced, lineage coverage rose to 0.91 and reproducibility success rose to 92%. Time to incident investigation fell by 66%, audit preparation effort by 62%, and compliance findings by 75%. Monte Carlo simulation further indicated lower risk variability once lineage controls were incorporated. These results support the conclusion that integrating lineage, traceability, and reproducibility into AI platforms enhances reliability, audit readiness, and trust in AI outcomes.</p> Divya Bonthala Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-04-15 2026-04-15 14 1s 261 269 Efficient Incremental Data Modeling in Apache Iceberg-Based Analytical Pipelines: Partitioning and Snapshot Optimization Strategies https://ijisae.org/index.php/IJISAE/article/view/8171 <p>Lakehouse architectures rely on Apache Iceberg to handle big data analytics reliably and at scale. Inefficient incremental modeling, however, can degrade query speed and inflate storage costs over time. This paper gives a quantitative assessment of partitioning, snapshot retention, and compaction policies using Monte Carlo simulations. Findings indicate that daily partitioning increased the scan-reduction ratio (0.61 to 0.82) and lowered query response time (18.4 seconds to 13.4 seconds). 
Snapshot expiration policies decreased the metadata-to-data ratio (0.18 to 0.07) and reduced overall query response time (19.3 seconds to 15.8 seconds). Threshold-based and daily compaction kept average file sizes above 240 MB and improved the overall efficiency score (0.032 versus 0.051). Combined optimization reduced overall latency by 34 percent and storage fragmentation by 41 percent. The results offer practical guidance for building robust and sustainable Iceberg analytical pipelines.</p> Guruprasad Raghothama Rao Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-04-15 2026-04-15 14 1s 270 277 From Requirements to Resilience: Architecting a Digital Thread Across Engineering and Supply Chain Using MBSE and PLM https://ijisae.org/index.php/IJISAE/article/view/8172 <p>Modern engineering enterprises invest heavily in CAD environments and PLM platforms, yet supply chains continue to fail at the point where design decisions meet operational execution. The root cause is rarely a logistics breakdown — it is an architectural one. Most enterprises begin the digital thread in CAD, after system intent has already been established informally, without structured traceability. The absence of Model-Based Systems Engineering (MBSE) at the origin of this thread means that requirements, functional allocations, and supply chain constraints never enter the product lifecycle in a machine-readable, queryable form. By the time geometry is committed, the decisions behind it are invisible to any governance mechanism. This paper proposes an enterprise architecture blueprint that repositions MBSE as the authoritative anchor of the digital thread, establishes a formal Semantic Mapping Framework to bridge the logical-to-physical boundary between MBSE and CAD, and uses PLM as the backbone that synchronizes both layers across the full product lifecycle. A RACI-based governance model enforces data ownership at every thread boundary. 
A five-level Digital Thread Maturity Model provides a structured adoption roadmap. The central argument is that supply chain resilience cannot be achieved operationally when it has not first been built architecturally — and that architecture begins in MBSE, not in CAD.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8172">https://doi.org/10.17762/ijisae.v14i1s.8172</a></p> Jasleen Singh Saini Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-03-25 2026-03-25 14 1s 278 284 Human-AI Collaborative Architecture for Enterprise Financial Platforms https://ijisae.org/index.php/IJISAE/article/view/8173 <p>Co-branded credit card platforms combine high-volume consumer software with stringent financial regulation, creating architectural challenges that standard design approaches cannot adequately address. This paper presents a human-AI collaborative architecture built around five interlocking design commitments: an event-driven core that captures every state transition as an immutable, replayable domain event; regulation-aware caching that restricts sensitive data domains to narrow read surfaces; cryptographic boundaries with key isolation scoped to the service and regulatory domain; a Zero Trust posture that enforces continuous authentication on every inter-service request; and a tiered human-AI collaboration model that is policy-governed rather than autonomous. The central argument is that compliance is not an external control overlay but a first-class structural property of data models, service boundaries, and event schemas from the outset of design. 
The resulting platform demonstrates that regulatory requirements and platform innovation are structurally complementary when encoded from the beginning of the architecture.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8173">https://doi.org/10.17762/ijisae.v14i1s.8173</a></p> Ravindra Rajasekhar Kavuru Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-03-25 2026-03-25 14 1s 285 294 Leveraging AI-Driven Predictive Analytics for Effective Program Management in Retail Supply Chains: A Program Manager's Perspective https://ijisae.org/index.php/IJISAE/article/view/8174 <p>Large retail firms are now leveraging AI-driven predictive analytics (PA) tools to improve demand forecasting, inventory routing, workforce planning, and disruption recovery. This article discusses how PA tools can be effectively incorporated into retail supply chain management (SCM) program management from a software technology program manager's perspective, highlighting that forecast accuracy improves much faster than organizational decision adoption, making change management a critical success factor. Production readiness rests on the foundation of data observability, stress validation, and human-in-the-loop governance, and sustainability depends on treating predictive analytics as an end-to-end solution rather than an isolated point solution. The article identifies these challenges and ways to mitigate them, covering recoverability-optimized architectures, curated feature stores, shadow testing, and confidence-based overrides. Governance with clear decision rights, escalation paths, and compliance is highlighted as a requirement for scaling predictive analytics across retail operational contexts. By combining technical innovation with program management discipline, predictive analytics can be integrated into retail supply chains as a strategic element. 
The insights provided here are intended to serve as a roadmap for program managers to effectively integrate technical innovation with organizational realities to improve service levels, reduce costs, and improve supply chain resiliency in a dynamic retail environment.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8174">https://doi.org/10.17762/ijisae.v14i1s.8174</a></p> Cijin Lonappan Kappani Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-03-25 2026-03-25 14 1s 295 303 A Governance-First and Systems-Theoretic Framework for Scalable Enterprise Cloud Integration Architecture https://ijisae.org/index.php/IJISAE/article/view/8175 <p>Enterprise cloud integration has traditionally been approached as a collection of discrete interfaces and data pipelines connecting heterogeneous systems. Such linear integration models are effective at a limited scale, but they might fail when it comes to nonlinear behavior, feedback effects and governance risks, which grow with the growth of enterprise complexity. This article explains enterprise cloud integration architecture as a complex adaptive system made of interacting and evolving subsystems, state dependencies, and governance boundaries. The article proposes a governance-first systems-theoretic framework that focuses on formal separation of control and data planes, embedded compliance and security mechanisms, predictive scalability modeling, and observability-driven feedback mechanisms. 
By treating governance as a stabilizing architectural invariant rather than a reactive constraint, the framework enables sustainable scalability, resilience, and long-term adaptability in enterprise cloud ecosystems.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8175">https://doi.org/10.17762/ijisae.v14i1s.8175</a></p> Chakra Dhari Gadige Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-02-25 2026-02-25 14 1s 304 319 Secured Credit Cards: A Strategic Partnership Model for Financial Inclusion and Customer Development https://ijisae.org/index.php/IJISAE/article/view/8179 <p>This article examines the secured credit card market as a critical gateway to financial inclusion for approximately 45 million credit-invisible Americans and proposes a transformative partnership model between financial institutions and their secured card customers. The article identifies significant gaps in the current industry approach, which treats secured cards primarily as risk mitigation tools rather than customer development opportunities, resulting in minimal educational support and high attrition rates. Through an analysis of customer demographics, industry limitations, and behavioral economics principles, this article advocates for a collaborative framework that positions banks as trusted advisors in their customers' credit-building journeys. The proposed partnership model incorporates comprehensive onboarding programs, behavioral nudging techniques, milestone-based rewards, and strategic partnerships with employers and community organizations. By shifting from transactional to relationship-based approaches, financial institutions can create aligned incentives that benefit all stakeholders while addressing fundamental gaps in financial literacy and inclusion. 
The implementation strategies leverage digital transformation, personalized support systems, and data analytics to create scalable solutions that guide customers toward positive financial behaviors and successful transitions to mainstream credit products.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8179">https://doi.org/10.17762/ijisae.v14i1s.8179</a></p> Avaneendra Kanaparti Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-04-16 2026-04-16 14 1s 320 326 Offloading Network Policy Enforcement to Data Processing Units https://ijisae.org/index.php/IJISAE/article/view/8180 <p>General-purpose server CPUs in modern data centers bear a dual burden: executing application workloads while simultaneously enforcing network policies. This split responsibility introduces computational overhead, cache contention, and latency variability that degrade both application throughput and network performance. This article examines the architectural case for offloading policy enforcement, connection tracking, firewall operations, and traffic metering to Data Processing Units (DPUs)—purpose-built accelerators integrated directly into the network data path. By relocating these functions from host CPUs to dedicated silicon, organizations recover substantial compute headroom while achieving deterministic, sub-microsecond network performance. The article analyzes the bottlenecks of CPU-based network processing, the architectural design of modern DPUs, the role of open standards in enabling portable policy management, and the operational benefits across diverse deployment scenarios. 
Results demonstrate measurable gains in resource utilization, energy efficiency, and latency consistency for latency-sensitive workloads, establishing hardware-accelerated network processing as a foundational shift in data center architecture.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8180">https://doi.org/10.17762/ijisae.v14i1s.8180</a></p> Satya Sagar Reddi Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-02-14 2026-02-14 14 1s 327 338 Convergence of AI and Zero Trust: Enabling Continuous Verification Across Hybrid Cloud Environments https://ijisae.org/index.php/IJISAE/article/view/8181 <p>Contemporary organizations confronting sophisticated threat actors across distributed hybrid cloud environments cannot maintain the velocity required for continuous verification of millions of daily authentication decisions through manual security operations. Artificial intelligence integration within Zero Trust frameworks enables operationally viable continuous verification across hybrid cloud infrastructures through systematic literature synthesis and conceptual framework development. Four contributions address existing gaps: (1) five-layer reference architecture explicitly integrating AI components (data collection, analytics, policy decision, enforcement, orchestration) with Zero Trust pillars across hybrid cloud platforms, (2) three-phase implementation framework with quantified metrics synthesized from eight documented enterprise deployments, (3) cross-sectoral deployment analysis across five industries with operational KPIs, (4) evidence-based mitigation strategies validated through expert consensus with twelve chief information security officers. Synthesized findings demonstrate measurable improvements detailed in Section VI, including significant reductions in misconfiguration incidents, detection time improvements, automated incident response capabilities, and substantial operational savings. 
Cross-sectoral results reveal industry-specific improvements ranging from 30–75% across manufacturing, financial services, healthcare, retail, and energy sectors. The integrated framework addresses documented gaps in AI-Zero Trust technical architectures for hybrid cloud continuous verification, providing actionable implementation guidance for organizations transitioning from perimeter-based defenses to AI-powered continuous authentication and authorization systems.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8181">https://doi.org/10.17762/ijisae.v14i1s.8181</a></p> Barinder Pal Singh Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-04-15 2026-04-15 14 1s 339 359 Hallucination Is a Retrieval Problem: Diagnosing Structural Confabulation in LLMs and a Path Forward via Grounded Belief Representations https://ijisae.org/index.php/IJISAE/article/view/8182 <p>Hallucination in large language models (LLMs), the confident generation of factually incorrect or unsupported content, remains one of the most consequential unsolved problems in the field. Despite an enormous volume of empirical work, the community lacks a mechanistic consensus on why models hallucinate even when ground-truth information resides in training corpora. This article argues that hallucination is fundamentally a retrieval failure, not a knowledge failure: the parametric weights encode sufficient information, but the inference-time process of locating and conditioning on that information is unreliable. This framing redirects blame from the knowledge store toward the access mechanism and suggests that retrieval-augmented approaches are not merely useful patches but are architecturally necessary. 
Four structural limits of the dominant decoder-only transformer paradigm are diagnosed: superposition-induced interference, attention dilution in long contexts, RLHF overconfidence calibration, and benchmark saturation, which together explain why scaling alone cannot resolve confabulation. Three concrete research directions are then proposed: (1) Belief-Grounded Decoding, which separates knowledge retrieval from language generation via an explicit epistemic state; (2) Structured Knowledge Integration for RAG, replacing flat retrieved text with relational subgraphs; and (3) Domain-Divergent Hallucination Benchmarks that test generalization across knowledge-distribution shift. Minimal proof-of-concept experiments executable within 12–18 months are outlined, and the critical failure modes of the proposed approaches are identified.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8182">https://doi.org/10.17762/ijisae.v14i1s.8182</a></p> Sai Manoj Jayakannan Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-04-15 2026-04-15 14 1s 360 372 AI-Optimized Real-Time Decision Systems for Digital Advertising https://ijisae.org/index.php/IJISAE/article/view/8183 <p>Real-time bidding architectures powering programmatic advertising face simultaneous demands across latency, privacy, and decision quality that no single prior system has addressed within a unified engineering framework. The deprecation of third-party cookies, platform-level tracking restrictions, and evolving data protection regulation under GDPR have fundamentally altered the identity infrastructure that behavioral targeting depends upon, while rigorous exchange-imposed deadlines continue to constrain every component of the serving pipeline.
Four concrete contributions are presented: a sub-50 ms AI inference pipeline built on distributed edge caching and SLO-aware gradient-boosted scoring; a federated identity framework achieving privacy-compliant personalization through rotating session tokens and cohort-based identifiers; a hybrid multi-agent reinforcement learning and large language model bidding optimizer delivering substantial revenue improvement over rule-based baselines; and a systematic experimental evaluation framework reporting latency, throughput, and CTR prediction accuracy synthesized from peer-reviewed production-scale benchmarks. End-to-end P95 latency remains within the exchange deadline at production DSP throughput, CTR prediction AUC reaches 0.776 for gradient-boosted models, and coordinated multi-agent RL bidding achieves 19,501 CNY platform revenue versus 5,347 CNY for hand-crafted rules. Zero-knowledge verification mechanisms address the measurement attribution gap introduced by identifier deprecation, while legally grounded privacy design satisfies GDPR requirements as system properties rather than post-hoc compliance overlays.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8183">https://doi.org/10.17762/ijisae.v14i1s.8183</a></p> Sai Dheeraj Guntupalli Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-04-15 2026-04-15 14 1s 373 383 AI-Driven Dynamic Pricing, Fee Optimization, and Incentive Intelligence Across the Transaction Lifecycle https://ijisae.org/index.php/IJISAE/article/view/8184 <p>Payment processors have historically relied on static billing models and broad merchant segmentation, creating structural inefficiencies in an increasingly dynamic digital commerce environment. Transaction-level costs, risks, and strategic value vary materially with context—channel, geography, funding source, payout timing, merchant behavior, and dispute outcomes—yet legacy pricing systems treat these dimensions as uniform. 
This article presents a modern pricing architecture that transforms the pricing engine into a real-time economic decision layer, combining transaction-level cost and loss forecasting, competitive and elasticity-aware optimization, continuous post-settlement learning, and an integrated incentive layer for promotions and merchant-funded campaigns. The platform employs machine learning for predictive components and large language models for unstructured signal extraction, enabling a pricing system that remains auditable, adaptive, and aligned with long-term network health. Implementation through governed architectural layers, deterministic fee construction with explainable components, event-driven lifecycle data contracts, and closed-loop learning mechanisms demonstrates how economic precision and transparency can coexist. Evaluation methods combining controlled experimentation, causality validation, and lifecycle measurement ensure that pricing decisions improve both processor profitability and merchant experience without sacrificing either regulatory compliance or competitive positioning.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8184">https://doi.org/10.17762/ijisae.v14i1s.8184</a></p> Satheesh Kumar Kumara Chinnaian Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-04-15 2026-04-15 14 1s 384 400 Cloud Modernization and High Availability Architecture: Strategic Foundations for Enterprise Digital Transformation https://ijisae.org/index.php/IJISAE/article/view/8185 <p>Database infrastructure modernization and cloud migration have become central strategic initiatives for most organizations today, driven largely by the need to remain competitive in the digital age, increase operational resilience, and ensure robust business continuity.
This article explains the principles of migrating from an on-premises database system to a cloud-native database solution with high availability, redundancy, automated failover, and a distributed architecture. To achieve 99.999 percent uptime, seamless scaling, improved resiliency, and greater efficiency, organizations will need to adopt architectural elements such as microservices, infrastructure as code, and a cloud-native approach; deliberate migration programs; and structured planning approaches to migration and modernization (including discovery, assessment, prioritization, optimization, and operational excellence), such as the six Rs and the cloud adoption frameworks from cloud service providers. Modern cloud environments (public, private, and hybrid) provide distributed computing resources while guaranteeing high availability for mission-critical systems. For example, technologies such as Active Data Guard, zero-downtime migration strategies, Real Application Clusters, and automated disaster recovery can be used to deliver near-zero downtime and scalability in highly available systems. Container orchestrators, elastic scaling processes, tiered storage mechanisms, observability, and the other components of a data infrastructure form an increasingly flexible and scalable platform that supports the accelerating growth of businesses and innovation across the world.
Data infrastructure literacy, secure infrastructure-as-code, and continuous optimization are keys to maintaining this competitive advantage, as they help to increase the reliability and automated recovery of business-critical processes.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8185">https://doi.org/10.17762/ijisae.v14i1s.8185</a></p> Rajesh Kumar Balusu Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-04-15 2026-04-15 14 1s 401 407 The Algorithmic Engine of American Resurgence: Catalyzing Labor Productivity through AI-First ERP Orchestration https://ijisae.org/index.php/IJISAE/article/view/8186 <p>Something structural has shifted in the global economy, and it is not just about technology getting faster. Labor shortages are biting in ways that feel permanent rather than cyclical. Workforces are aging. And despite enormous investment in digital infrastructure, American businesses are not getting the productivity returns that investment was supposed to generate. The gap between what enterprise technology promises and what organizations actually extract from it has become one of the more costly open problems in modern business—and closing it requires more than upgrading software. It requires rethinking the architecture entirely. Autonomous Resource Orchestration (ARO) aims to change how we use Enterprise Resource Planning (ERP) by making it an active system that connects Human Capital Management (HCM) platforms with Financial Management Systems (FMS) using a built-in generative AI layer, which can manage resources, identify problems, and start workflows instantly without needing a manager's input. 
Comparative evidence across smart factory and knowledge-intensive service environments suggests the productivity lift is real and substantial—administrative time drops sharply, workforce reallocation that once took weeks happens in hours, internal talent mobility triples, and forecasting accuracy tightens to a degree that changes how confidently organizations can plan. None of these improvements requires replacing workers; what it requires is an end to wasting their time on tasks that systems should handle automatically, redirecting that recaptured capacity toward the creative, relational, high-judgment work that actually drives growth.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8186">https://doi.org/10.17762/ijisae.v14i1s.8186</a></p> Srikanth Gadde Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-04-15 2026-04-15 14 1s 408 415 Integration of Autonomous Artificial Intelligence within Established Enterprise Resource Planning Financial Infrastructure https://ijisae.org/index.php/IJISAE/article/view/8187 <p>Enterprise artificial intelligence is undergoing a fundamental shift, moving from reactive question answering toward self-sufficient agents that can observe their surroundings, make plans, and carry out tasks. This academic study examines the integration of autonomous AI with existing Enterprise Resource Planning systems, focusing on the monitoring of financial transactions and regulatory frameworks. The article covers basic ideas about how autonomous systems work, ways to combine them with ERP systems, methods for continuous learning, and the management structures needed for them to operate independently in regulated financial settings. By examining current processes and emerging implementation configurations, this study identifies pathways toward anticipatory financial oversight infrastructures that amplify, rather than substitute for, human judgment.
The shift from conventional batch-oriented analytics toward real-time, event-driven agent deployment represents a fundamental change in how organizations manage complex operational workflows. Key enablers include layered agent architectures, adaptive learning against evolving deceptive patterns, learning from human feedback, and ERP systems that interoperate across platforms, which together form the technical foundations for practical use. While obstacles remain in interpretability, synchronization, and institutional acceptance, the convergence of these technological advances makes autonomous AI integration both feasible and increasingly imperative for sustaining effective financial controls at scale.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8187">https://doi.org/10.17762/ijisae.v14i1s.8187</a></p> Pradeep Narayanan Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-04-15 2026-04-15 14 1s 416 426 The Metrology Imperative: The Necessity of Robust Evaluation Frameworks and Comprehensive Automated Judges in Generative AI https://ijisae.org/index.php/IJISAE/article/view/8188 <p>Across the past several years, the accelerating advancement of Large Language Models (LLMs) and generative artificial intelligence has quietly produced a crisis that much of the field has been slow to name directly—a breakdown in the ability to evaluate what these systems can and cannot actually do. Traditional, static benchmarking methodologies have proven structurally inadequate, collapsing under the combined weight of rapid benchmark saturation, pervasive data contamination, and the kind of systematic overfitting that emerges whenever commercial incentives are tied too tightly to leaderboard rankings.
This brief argues, with considerable urgency, that building robust and dynamic evaluation frameworks alongside sophisticated automated judges—most prominently through the LLM-as-a-Judge paradigm—is not an optional enhancement to existing practices but an absolute prerequisite for the continued, safe, and value-aligned development of AI systems. Through a careful examination of where current evaluation practices fail, an analysis of the architectural requirements governing automated multi-agent juries, and a survey of multi-dimensional safety assessment approaches, a coherent pathway toward genuinely reliable AI metrology is charted here. The arguments and architectural outlines presented across these sections are intended to serve as a structured foundational blueprint for a full-length 40-page journal article that will pursue the theoretical, empirical, and architectural dimensions of this problem in considerably greater depth.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8188">https://doi.org/10.17762/ijisae.v14i1s.8188</a></p> Ankur Partap Kotwal Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-04-15 2026-04-15 14 1s 427 435 Standing Out Early in Enterprise Web Engineering: Practical Ownership Strategies for Platform and API Professionals https://ijisae.org/index.php/IJISAE/article/view/8189 <p>The API economies and distributed platform ecosystems that are a natural outcome of cloud-native applications have changed the baseline competencies for entry-level enterprise web engineering talent. Programming is expected as a threshold skill. Engineers who can show delivery maturity, production responsibility, and cross-functional collaboration are some of the most sought after. There is a gap between the task-oriented focus of current technical training and the ownership mindset needed to succeed in modern engineering culture. This article proposes a set of concrete frameworks for early-career engineers, spanning ownership,
engineering quality, observability, experimentation, and inclusive platform delivery. We discuss how proactive problem solving, design-aware delivery, telemetry-driven development, and accessibility-first engineering work together to enable early-career engineers to deliver enduring value. In an environment of saturated hiring markets for engineering talent, the competitive edge increasingly appears to be disciplined excellence across the end-to-end delivery lifecycle.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8189">https://doi.org/10.17762/ijisae.v14i1s.8189</a></p> Dreema Patel Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-04-15 2026-04-15 14 1s 436 443 Conversational AI Agents for Financial Operations with Escalation-Aware Handoff Protocols: Designing Intelligent Human-AI Collaboration Systems https://ijisae.org/index.php/IJISAE/article/view/8190 <p>Conversational artificial intelligence (AI) represents a paradigm shift from deterministic rule-based process automation to context-aware, always-on learning systems for financial operations. Building on this shift, this article presents a framework for escalation-aware conversational AI in financial operations, including a multi-dimensional signal architecture that leverages linguistic, behavioral, transactional, and relationship signals to make real-time, probabilistic escalation decisions for customers and service agents of financial institutions. Another key concept is the collaboration zone, in which artificial intelligence and a human agent process in parallel with distinct skills and no explicit handoff of control between them. A learning curriculum builds on human agents' reasoning to discover human-like reasoning paths and extend the AI competency frontier, sustaining a high rate of automation while ensuring customer experiences comparable to those delivered by human agents.
Other considerations include implementation architecture, workforce transformation, quality assurance and continuous improvement operations, and future directions such as proactive engagement, multimodal interaction, federated learning, and the evolution of autonomous agents.</p> <p>DOI: <a href="https://doi.org/10.17762/ijisae.v14i1s.8190">https://doi.org/10.17762/ijisae.v14i1s.8190</a></p> Gautham Paspala Copyright (c) 2026 http://creativecommons.org/licenses/by-sa/4.0 2026-04-15 2026-04-15 14 1s 444 455
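<p>The multi-dimensional signal architecture described in the escalation-aware handoff abstract above can be illustrated with a minimal sketch. The four signal families (linguistic, behavioral, transactional, relationship) follow the abstract, but the weights, bias, and 0.7 threshold below are illustrative assumptions for this sketch, not values taken from the paper.</p>

```python
import math

# Illustrative weights for the four signal families named in the abstract.
# These values and the 0.7 escalation threshold are assumptions, not the
# paper's implementation.
WEIGHTS = {
    "linguistic": 1.4,     # e.g., frustration or confusion detected in text
    "behavioral": 1.0,     # e.g., repeated rephrasing, long pauses
    "transactional": 1.8,  # e.g., high-value or disputed transaction
    "relationship": 0.6,   # e.g., at-risk or high-tier customer
}
BIAS = -2.5       # default toward staying automated when signals are weak
THRESHOLD = 0.7   # probability above which the AI hands off to a human

def escalation_probability(signals: dict[str, float]) -> float:
    """Combine per-family signal scores (each in [0, 1]) into an
    escalation probability via a logistic model."""
    z = BIAS + sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def should_escalate(signals: dict[str, float]) -> bool:
    """Real-time, probabilistic escalation decision."""
    return escalation_probability(signals) >= THRESHOLD
```

<p>Under these assumed parameters, a calm low-stakes interaction stays automated, while a visibly frustrated customer on a disputed high-value transaction crosses the threshold and is handed to a human agent; a production system would learn the weights from labeled handoff outcomes rather than fix them by hand.</p>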