Evaluation of Autoencoders: Training Using Original, Encoded and Decoded Images for Prediction

Authors

  • Nidhi B. Shah, Amit P. Ganatra

Keywords:

Autoencoder, encode, decode, dimensionality reduction, feature learning, neural network.

Abstract

An autoencoder (AE) is a neural network that learns to reproduce its input at its output: it first compresses the input into a latent-space representation and then reconstructs the output from that representation. Today, data denoising and dimensionality reduction for data visualization are considered two of the most useful practical applications of autoencoders. With appropriate dimensionality and sparsity constraints, autoencoders can learn data projections that are more interesting than those produced by PCA or other basic techniques. In this paper, we implement and evaluate different types of autoencoders. First, a model is trained on the original (raw) images, and this trained model is tested on image prediction, measured in terms of both accuracy and time. The same model is then applied with the encoded image (the intermediate result of the autoencoder) as input, and again with the decoded image (the final result of the autoencoder) as input. All three models are applied and their results are compared across the different types of autoencoders implemented.
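To make the three kinds of input compared in the paper concrete (original images, their encoded latent representations, and the decoded reconstructions), the following is a minimal NumPy sketch of a single-hidden-layer autoencoder. The architecture, layer sizes, learning rate, and toy data are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 100 samples of 64 pixels each (8x8), values in [0, 1].
X = rng.random((100, 64))

# Minimal autoencoder: 64 inputs -> 16 latent units (encode) -> 64 outputs (decode).
n_in, n_latent = X.shape[1], 16
W_enc = rng.normal(0, 0.1, (n_in, n_latent))
W_dec = rng.normal(0, 0.1, (n_latent, n_in))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(200):                       # plain gradient descent on MSE
    H = sigmoid(X @ W_enc)                 # encoded (latent) representation
    X_hat = sigmoid(H @ W_dec)             # decoded reconstruction
    err = X_hat - X
    dZ2 = err * X_hat * (1 - X_hat)        # backprop through output sigmoid
    dW_dec = H.T @ dZ2 / len(X)
    dZ1 = (dZ2 @ W_dec.T) * H * (1 - H)    # backprop through hidden sigmoid
    dW_enc = X.T @ dZ1 / len(X)
    W_dec -= lr * dW_dec
    W_enc -= lr * dW_enc

encoded = sigmoid(X @ W_enc)               # compressed features, shape (100, 16)
decoded = sigmoid(encoded @ W_dec)         # reconstructed images, shape (100, 64)
```

In the paper's setup, a classifier would be trained three times, once on `X`, once on `encoded`, and once on `decoded`, and the resulting accuracy and training time compared.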

References

G. N. Karagoz, A. Yazici, T. Dokeroglu, and A. Cosar, “Analysis of Multiobjective Algorithms for the Classification of Multi-Label Video Datasets,” IEEE Access, vol. 8, pp. 163937–163952, 2020, doi: 10.1109/access.2020.3022317.

J. Zhai, S. Zhang, J. Chen, and Q. He, “Autoencoder and Its Various Variants,” 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Oct. 2018, doi: 10.1109/smc.2018.00080.

C.-Y. Chen, J.-S. Leu, and S. W. Prakosa, “Using autoencoder to facilitate information retention for data dimension reduction,” 2018 3rd International Conference on Intelligent Green Building and Smart Grid (IGBSG), Apr. 2018, doi: 10.1109/igbsg.2018.8393545.

D. Tomar, Y. Prasad, M. K. Thakur, and K. K. Biswas, “Feature Selection Using Autoencoders,” 2017 International Conference on Machine Learning and Data Science (MLDS), Dec. 2017, doi: 10.1109/mlds.2017.20.

F. Zhuang, X. Cheng, P. Luo, S. J. Pan, and Q. He, “Supervised Representation Learning with Double Encoding-Layer Autoencoder for Transfer Learning,” ACM Transactions on Intelligent Systems and Technology, vol. 9, no. 2, pp. 1–17, Oct. 2017, doi: 10.1145/3108257.

A. Caliskan, H. Badem, A. Basturk, and M. E. Yuksel, “The effect of autoencoders over reducing the dimensionality of a dermatology data set,” 2016 Medical Technologies National Congress (TIPTEKNO), Oct. 2016, doi: 10.1109/tiptekno.2016.7863101.

“Autoencoders in Deep Learning: Tutorial & Use Cases [2023],” V7 Labs. https://www.v7labs.com/blog/autoencoders-guide

K. Sangwan, “Journey through Sequence-to-Sequence models, Attention and Transformer,” Medium, Jul. 14, 2021. https://kuldeepsangwan.medium.com/journey-through-sequence-to-sequence-models-attention-and-transformer-56365c32e99e

“Explain about Sparse Autoencoder? | i2tutorials,” i2tutorials, Sep. 13, 2019. https://www.i2tutorials.com/explain-about-sparse-autoencoder/

“ML | Auto-Encoders - GeeksforGeeks,” GeeksforGeeks, Jun. 21, 2019. https://www.geeksforgeeks.org/ml-auto-encoders/

R. Khandelwal, “Deep Learning — Different Types of Autoencoders,” Medium, Jan. 25, 2019. https://medium.datadriveninvestor.com/deep-learning-different-types-of-autoencoders-41d4fa5f7570

N. Hubens, “Deep inside: Autoencoders,” Medium, Apr. 10, 2018. https://towardsdatascience.com/deep-inside-autoencoders-7e41f319999f

Published

24.03.2024

How to Cite

Shah, N. B., & Ganatra, A. P. (2024). Evaluation of Autoencoders: Training Using Original, Encoded and Decoded Images for Prediction. International Journal of Intelligent Systems and Applications in Engineering, 12(3), 2563–2569. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/5728

Section

Research Article