No Reference Quality Assessment Metric for Multi-spectral and Multi-Modal Image Fusion using Sparse Approximate Variational Autoencoder

Authors

  • Milind S. Patil, Research Scholar, All India Shri Shivaji Memorial Society’s Institute of Information Technology, Pune, India.
  • Pradip B. Mane, Principal, All India Shri Shivaji Memorial Society’s Institute of Information Technology, Pune, India.

Keywords:

Deep Neural Network (DNN), Sparse Approximate Variational Autoencoder, Quality Assessment Regression Model

Abstract

Unlike natural-image quality assessment, satellite stereo images are judged by different quality criteria in different application contexts, which makes it difficult to develop an appropriate objective evaluation model. The field of perceptual quality evaluation has evolved significantly and continues to expand. No-reference image quality assessment (NR-IQA) is critical in low-level computer vision, and deep neural networks are gaining popularity for NR-IQA applications. Existing deep-learning-based systems are generally supervised and depend on an unrealistically large amount of labelled training data, while model-based techniques are unsupervised and flexible but rely on handcrafted priors. Moreover, most existing NR-IQA models were designed for synthetically distorted images and perform poorly on in-the-wild images, which are common in many practical applications. In this research, a blind image quality evaluation metric for multi-spectral and multi-modal image fusion techniques is developed. The proposed no-reference quality measure is examined and compared against several well-known state-of-the-art methods and against mean opinion scores (MOS). The proposed quality-assessment regression models successfully predict quality scores, achieving 96% similarity with the MOS: a Pearson correlation coefficient of 0.96 and a Spearman rank correlation coefficient of 0.83. To exploit the abundant self-supervisory information and reduce the model's uncertainty, we impose self-consistency between the outputs of our quality assessment model for each image and its sparse codebook. Our results demonstrate that the proposed technique outperforms other methods on fused-image datasets with distorted images.
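The abstract reports agreement with MOS via the Pearson linear correlation coefficient (0.96) and Spearman's rank correlation coefficient (0.83), the two standard evaluation metrics for IQA models. As a minimal illustrative sketch (not the paper's implementation, and using invented scores rather than the paper's data), these metrics can be computed with SciPy:

```python
# Illustrative sketch: comparing predicted quality scores against MOS
# using the two correlation metrics reported in the abstract.
from scipy.stats import pearsonr, spearmanr

def evaluate_iqa(predicted, mos):
    """Return (PLCC, SRCC): Pearson linear and Spearman rank correlation
    between a model's predicted quality scores and mean opinion scores."""
    plcc, _ = pearsonr(predicted, mos)   # linear agreement
    srcc, _ = spearmanr(predicted, mos)  # monotonic (rank) agreement
    return plcc, srcc

# Hypothetical data, not taken from the paper's dataset.
predicted = [3.1, 4.2, 2.5, 4.8, 3.9, 1.7]
mos       = [3.0, 4.5, 2.2, 4.9, 3.5, 1.9]
plcc, srcc = evaluate_iqa(predicted, mos)
print(f"PLCC={plcc:.3f}, SRCC={srcc:.3f}")
```

PLCC rewards accurate absolute scores, while SRCC only rewards correct ranking of images by quality; a model can rank images perfectly (SRCC = 1.0) while its raw scores are merely linearly related to MOS.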


References

H. R. Sheikh, M. F. Sabir and A. C. Bovik, "A Statistical Evaluation of Recent Full Reference Image Quality Assessment Algorithms," in IEEE Transactions on Image Processing, vol. 15, no. 11, pp. 3440-3451, Nov. 2006, doi: 10.1109/TIP.2006.881959.

S. Bosse, D. Maniry, K. -R. Müller, T. Wiegand and W. Samek, "Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment," in IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 206-219, Jan. 2018, doi: 10.1109/TIP.2017.2760518.

W. Sun, Q. Liao, J. -H. Xue and F. Zhou, "SPSIM: A Superpixel-Based Similarity Index for Full-Reference Image Quality Assessment," in IEEE Transactions on Image Processing, vol. 27, no. 9, pp. 4232-4244, Sept. 2018, doi: 10.1109/TIP.2018.2837341.

A. Rehman and Z. Wang, "Reduced-Reference Image Quality Assessment by Structural Similarity Estimation," in IEEE Transactions on Image Processing, vol. 21, no. 8, pp. 3378-3389, Aug. 2012, doi: 10.1109/TIP.2012.2197011.

Z. Wang and A. C. Bovik, "Reduced- and No-Reference Image Quality Assessment," in IEEE Signal Processing Magazine, vol. 28, no. 6, pp. 29-40, Nov. 2011, doi: 10.1109/MSP.2011.942471.

J. Wu, W. Lin, G. Shi and A. Liu, "Reduced-Reference Image Quality Assessment With Visual Information Fidelity," in IEEE Transactions on Multimedia, vol. 15, no. 7, pp. 1700-1705, Nov. 2013, doi: 10.1109/TMM.2013.2266093.

A. Mittal, A. K. Moorthy and A. C. Bovik, "No-Reference Image Quality Assessment in the Spatial Domain," in IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695-4708, Dec. 2012, doi: 10.1109/TIP.2012.2214050.

T. Virtanen, M. Nuutinen, M. Vaahteranoksa, P. Oittinen and J. Häkkinen, "CID2013: A Database for Evaluating No-Reference Image Quality Assessment Algorithms," in IEEE Transactions on Image Processing, vol. 24, no. 1, pp. 390-402, Jan. 2015, doi: 10.1109/TIP.2014.2378061.

Z. M. Parvez Sazzad, Y. Kawayoke and Y. Horita, "No reference image quality assessment for JPEG2000 based on spatial features," Signal Processing: Image Communication, vol. 23, no. 4, pp. 257–268, 2008, doi: 10.1016/j.image.2008.03.005.

A. Der Kiureghian and O. Ditlevsen, “Aleatory or epistemic? does it matter?” Structural safety, vol. 31, no. 2, pp. 105–112, 2009.

M. H. Faber, “On the treatment of uncertainties and probabilities in engineering decision analysis,” 2005.

X. Geng, “Label distribution learning,” IEEE Trans. Knowl. Data Eng., vol. 28, no. 7, pp. 1734–1748, 2016.

X. Geng and L. Luo, “Multilabel ranking with inconsistent rankers,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2014, pp. 3742–3747.

C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra, "Weight uncertainty in neural network," in Proc. Int. Conf. Mach. Learn. (ICML), 2015, pp. 1613–1622.

Y. Gal and Z. Ghahramani, "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning," in Proc. Int. Conf. Mach. Learn. (ICML), 2016, pp. 1050–1059.

A. Kendall and Y. Gal, "What uncertainties do we need in Bayesian deep learning for computer vision?" in Adv. Neural Inf. Process. Syst. (NeurIPS), vol. 30, 2017.

A. Kendall, V. Badrinarayanan, and R. Cipolla, “Bayesian segnet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding,” arXiv preprint arXiv:1511.02680, 2015.

J. Chang, Z. Lan, C. Cheng, and Y. Wei, "Data uncertainty learning in face recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2020, pp. 5710–5719.

F. Kraus and K. Dietmayer, "Uncertainty estimation in one-stage object detection," in Proc. IEEE Intell. Transp. Syst. Conf. (ITSC), 2019, pp. 53–60.

T. Yu, D. Li, Y. Yang, T. M. Hospedales, and T. Xiang, “Robust person re-identification by modelling feature uncertainty,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2019, pp. 552–561.

Z. Zhou, F. Zhou and G. Qiu, "Blind Image Quality Assessment based on Separate Representations and Adaptive Interaction of Content and Distortion," in IEEE Transactions on Circuits and Systems for Video Technology, doi: 10.1109/TCSVT.2023.3299328.

Mingdeng Cao, Yanbo Fan, Yong Zhang, Jue Wang, and Yujiu Yang. VDTR: Video deblurring with transformer. arXiv preprint arXiv:2204.08023, 2022.

Ding Liu, Zhaowen Wang, Yuchen Fan, Xianming Liu, Zhangyang Wang, Shiyu Chang, and Thomas Huang. Robust video super-resolution with learned temporal dynamics. In Proc. of ICCV, 2017.

Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. arXiv preprint arXiv:2111.09881, 2021.

Sheng Yang, Qiuping Jiang, Weisi Lin, and Yongtao Wang. An end-to-end saliency-guided deep neural network for no-reference image quality assessment. In Proc. of ACM MM, 2019.

X. Ma, S. Zhang, C. Liu and D. Yu, "Bridge the gap between full-reference and no-reference: A totally full-reference induced blind image quality assessment via deep neural networks," in China Communications, vol. 20, no. 6, pp. 215-228, June 2023, doi: 10.23919/JCC.2023.00.023.

L. Yu, J. Li, F. Pakdaman, M. Ling and M. Gabbouj, "MAMIQA: No-Reference Image Quality Assessment Based on Multiscale Attention Mechanism With Natural Scene Statistics," in IEEE Signal Processing Letters, vol. 30, pp. 588-592, 2023, doi: 10.1109/LSP.2023.3276645.

J. Si, B. Huang, H. Yang, W. Lin and Z. Pan, "A no-Reference Stereoscopic Image Quality Assessment Network Based on Binocular Interaction and Fusion Mechanisms," in IEEE Transactions on Image Processing, vol. 31, pp. 3066-3080, 2022, doi: 10.1109/TIP.2022.3164537.

Y. Zhu, Y. Li, W. Sun, X. Min, G. Zhai and X. Yang, "Blind Image Quality Assessment Via Cross-View Consistency," in IEEE Transactions on Multimedia, doi: 10.1109/TMM.2022.3224319.

J. Yang et al., "No Reference Quality Assessment for Screen Content Images Using Stacked Autoencoders in Pictorial and Textual Regions," in IEEE Transactions on Cybernetics, vol. 52, no. 5, pp. 2798-2810, May 2022, doi: 10.1109/TCYB.2020.3024627.

K. Ding, K. Ma, S. Wang and E. P. Simoncelli, "Image Quality Assessment: Unifying Structure and Texture Similarity," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 5, pp. 2567-2581, 1 May 2022, doi: 10.1109/TPAMI.2020.3045810.

Varga, D. “Full-Reference Image Quality Assessment Based on Grünwald–Letnikov Derivative, Image Gradients, and Visual Saliency.” Electronics 2022, 11, 559. https://doi.org/10.3390/electronics11040559

Kamal Lamichhane, Marco Carli, Federica Battisti, “A CNN-based no reference image quality metric exploiting content saliency”, Signal Processing: Image Communication, Volume 111, 2023, 116899, ISSN 0923-5965, https://doi.org/10.1016/j.image.2022.116899.

Alamgeer, S., Farias, M.C. “A two-stream cnn based visual quality assessment method for light field images.” Multimed Tools Appl 82, 5743–5762 (2023). https://doi.org/10.1007/s11042-022-13436-4

Published

23.02.2024

How to Cite

Patil, M. S., & Mane, P. B. (2024). No Reference Quality Assessment Metric for Multi-spectral and Multi-Modal Image Fusion using Sparse Approximate Variational Autoencoder. International Journal of Intelligent Systems and Applications in Engineering, 12(17s), 724–732. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/4941

Section

Research Article