DeepFusion: A Unified Latent Framework for Cross-Modal Biometric and Behavioral Integrity Verification
Keywords:
Biometric Security, Deep Learning Authentication, Digital Integrity Verification, Identity Verification Systems, Latent Feature Fusion, Multimodal Biometrics, Vision Transformer

Abstract
Ensuring high-fidelity digital identity verification increasingly requires integrating heterogeneous identity signals, including physical biometrics and behavioral telemetry. Traditional systems process modalities independently, limiting their ability to detect sophisticated identity manipulation and adversarial attacks. This research introduces DeepFusion, a unified latent framework for cross-modal fusion that embeds heterogeneous data streams into a shared manifold using a Deep Joint Embedding (DJE) architecture. A gated fusion mechanism dynamically weights each modality based on real-time signal quality, while a triplet-loss-based consistency objective enforces alignment across biometric and behavioral patterns. To enhance adversarial resilience, DeepFusion explicitly detects cross-modal discrepancies indicative of presentation, injection, and synthetic identity attacks. The framework also leverages contrastive and adversarial representation learning to preserve sensitivity to anomalous behaviors while maintaining generalization to previously unseen identity patterns. Empirical evaluation on large-scale multimodal datasets demonstrates substantial improvements in detection precision, reduction in false acceptance rates, and robust cross-domain generalization compared to unimodal and naive fusion baselines. These results establish latent-level multimodal fusion as a scalable, resilient, and high-fidelity methodology for operational identity verification in complex digital ecosystems, offering strong defenses against evolving adversarial threats.
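The two core mechanisms summarized above, quality-gated fusion of per-modality embeddings and a triplet-loss consistency objective, can be illustrated with a minimal NumPy sketch. This is not the DeepFusion implementation; the function names, the softmax gate over scalar quality scores, and the Euclidean triplet margin are illustrative assumptions standing in for the paper's learned gating network and DJE training objective.

```python
import numpy as np

def gated_fusion(embeddings, qualities):
    """Combine per-modality embeddings (all of equal dimension) using
    softmax gates over real-time quality scores: a higher-quality
    modality receives a larger fusion weight."""
    q = np.asarray(qualities, dtype=float)
    w = np.exp(q - q.max())          # stable softmax
    w /= w.sum()
    E = np.stack(embeddings)         # shape: (n_modalities, dim)
    fused = (w[:, None] * E).sum(axis=0)
    return fused, w

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge triplet loss: pull same-identity pairs together and push
    different-identity pairs apart by at least `margin` in the
    shared latent space."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Example: a degraded behavioral signal contributes less to the fusion.
face_emb = np.ones(4)                # hypothetical face embedding
gait_emb = np.zeros(4)               # hypothetical behavioral embedding
fused, weights = gated_fusion([face_emb, gait_emb], qualities=[2.0, 0.5])
```

In a trained system the gate would be a learned function of signal-quality features rather than a fixed softmax over raw scores, but the sketch captures the intended behavior: fusion weights sum to one and track modality quality, while the triplet objective is zero only when cross-modal identity pairs are already separated by the margin.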
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.