Blind Synthetic Image Quality Assessment Using EfficientNet-V2

Authors

  • Yogita Gabhane, Department of ECE, IIIT, Nagpur, 441108, Maharashtra, India
  • Tapan Kumar Jain, Department of ECE, IIIT, Nagpur, 441108, Maharashtra, India
  • Vipin Kamble, Department of ECE, Visvesvaraya National Institute of Technology, Nagpur, 440010, Maharashtra, India

Keywords

Deep convolutional neural network, EfficientNet-V2 S, Blind Image Quality Assessment, TID2013 dataset, transfer learning, SROCC, PLCC

Abstract

We propose a simple and efficient deep convolutional neural network (CNN) model based on EfficientNet-V2 S for Blind Image Quality Assessment (BIQA) of the distorted images in the TID2013 dataset, which contains images subjected to different types and levels of distortion. The advantages of EfficientNet-V2 S over its base model are exploited for faster training and lower computational and time complexity. For quality assessment, each image is resized and normalized in a manner that retains its information content. The pre-trained model is fine-tuned via transfer learning on 80% of the TID2013 dataset, and the model is tested on the remaining 20% of the distorted images. The actual and predicted scores are compared using two correlation coefficients, namely Spearman's Rank Order Correlation Coefficient (SROCC) and the Pearson Linear Correlation Coefficient (PLCC), against state-of-the-art BIQA techniques. We fine-tuned the model by modifying the network structure, especially the last layers, for the target dataset and achieved remarkable performance on sixteen of the 24 distortion types.
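As a minimal illustration of the two evaluation metrics named in the abstract, the sketch below computes SROCC and PLCC between ground-truth and predicted quality scores with `scipy.stats`. The score arrays are made-up placeholders for demonstration only, not results from the paper.

```python
# Compare predicted quality scores against subjective (ground-truth) scores
# using the two correlation metrics reported in BIQA work.
from scipy.stats import spearmanr, pearsonr

actual_mos = [3.2, 4.1, 2.5, 5.0, 1.8, 3.9]  # hypothetical subjective scores
predicted  = [3.0, 4.3, 2.7, 4.8, 2.0, 4.5]  # hypothetical model outputs

srocc, _ = spearmanr(actual_mos, predicted)  # rank-order (monotonic) agreement
plcc, _ = pearsonr(actual_mos, predicted)    # linear agreement

print(f"SROCC = {srocc:.4f}, PLCC = {plcc:.4f}")
```

SROCC depends only on the ranking of the scores, so it is insensitive to any monotonic nonlinearity in the predictor, while PLCC measures linear agreement with the raw magnitudes; reporting both is the standard practice in IQA benchmarking.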




Published

27.12.2023

How to Cite

Gabhane, Y., Jain, T. K., & Kamble, V. (2023). Blind Synthetic Image Quality Assessment Using EfficientNet-V2. International Journal of Intelligent Systems and Applications in Engineering, 12(9s), 352–361. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/4324

Section

Research Article