CNN-Based Image Classification for Handwritten Digit Recognition

Authors

  • Kishan Kushwaha, Department of Computer Science and Information Technology, Koneru Lakshmaiah Education Foundation, Vaddeswaram 522502, AP, India
  • Sakala Rahul, Department of Computer Science and Information Technology, Koneru Lakshmaiah Education Foundation, Vaddeswaram 522502, AP, India
  • Shaik Eliyaz, Department of Computer Science and Information Technology, Koneru Lakshmaiah Education Foundation, Vaddeswaram 522502, AP, India
  • Chaitanya Reddy, Department of Computer Science and Information Technology, Koneru Lakshmaiah Education Foundation, Vaddeswaram 522502, AP, India
  • Amarendra K, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram 522502, AP, India

Keywords:

Handwritten digit recognition, CNN, MNIST dataset, image classification

Abstract

In the digital era, handwritten digit recognition (HDR) plays a pivotal role in converting analog information into digital form. Traditional methods of digitizing handwritten content often carry substantial costs. This paper addresses that problem by presenting an effective algorithm for accurately recognizing handwritten digits from scanned images, thereby significantly reducing those costs. The study investigates and compares how different algorithms perform when classifying handwritten digits; the comparison is based on varying the number of hidden layers and the number of training epochs and assessing the resulting accuracy. The experiments use the popular Modified National Institute of Standards and Technology (MNIST) dataset for evaluation. Through this systematic exploration, each algorithm's accuracy in classifying handwritten digits from scanned images was thoroughly evaluated, and the comparative analyses reveal optimal configurations for HDR algorithms. These findings offer insight into making the digitization of handwritten information easier and more affordable, enabling industries reliant on digital conversion to adopt cost-effective, accurate, and efficient HDR methods.
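To make the experimental setup described above concrete, the following is a minimal sketch of such a comparison: a small convolutional network trained on MNIST while the number of dense hidden layers and the epoch budget are varied, with test accuracy recorded for each configuration. The Keras framework, layer sizes, and hyperparameters shown here are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative sketch (assumed setup): vary hidden-layer count and epochs,
# report MNIST test accuracy for each configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(num_hidden_layers: int) -> tf.keras.Model:
    """Build a small CNN with a variable number of dense hidden layers."""
    model = models.Sequential([layers.Input(shape=(28, 28, 1))])
    model.add(layers.Conv2D(32, (3, 3), activation="relu"))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Flatten())
    for _ in range(num_hidden_layers):
        model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dense(10, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Load MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# Compare accuracy across hidden-layer counts and epoch budgets.
for hidden in (1, 2, 3):
    for epochs in (5, 10):
        model = build_cnn(hidden)
        model.fit(x_train, y_train, epochs=epochs, batch_size=128, verbose=0)
        _, acc = model.evaluate(x_test, y_test, verbose=0)
        print(f"hidden layers={hidden}, epochs={epochs}, test accuracy={acc:.4f}")
```

Each printed line corresponds to one cell of the comparative analysis the abstract describes; the configuration with the highest test accuracy would be reported as the optimal one.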


Published

02.02.2024

How to Cite

Kushwaha, K., Rahul, S., Eliyaz, S., Reddy, C., & K, A. (2024). CNN-Based Image Classification for Handwritten Digit Recognition. International Journal of Intelligent Systems and Applications in Engineering, 12(14s), 91–97. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/4640

Issue

Section

Research Article