Artificial Intelligence-Powered Development of Location Image Analysis Algorithm using Image Crawling and Deep Learning
Keywords:
Deep Learning, Unsupervised Learning, Image Analysis, Convolutional Neural Network, Location Information, Location Image

Abstract
This study develops an advanced image analysis service system based on deep learning. A CNN (Convolutional Neural Network) is built into the system and trained on image data collected from Google and Instagram. Given a place image of Jeju Island as input, the service returns the relevant location information based on its own training data. The process consists of six primary parts: collecting image data, converting it into an appropriate format, performing training and prediction, filtering invalid training data, and repeating the learning phase. Accuracy improvement measures are applied throughout the study. In conclusion, the implemented system achieves a prediction accuracy of about 79.2%; given more training data, it is expected to predict a wider variety of places more accurately.
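The iterative pipeline described above (train, predict, filter invalid samples, repeat) can be sketched in Python. This is a minimal illustration, not the paper's implementation: the function names, sample structure, and confidence threshold are all assumptions, and the actual CNN training step is left as a stub.

```python
# Hypothetical sketch of the abstract's learning loop: train/predict,
# filter invalid training data, then repeat the learning phase.
# All names and the 0.5 threshold are illustrative, not from the paper.

def filter_invalid(samples, min_confidence=0.5):
    """Drop training samples whose predicted confidence is too low,
    mimicking the 'filter invalid training data' step."""
    return [s for s in samples if s["confidence"] >= min_confidence]

def learning_loop(samples, rounds=3):
    """Repeat the learning phase, filtering low-confidence samples each round."""
    for _ in range(rounds):
        # (a real system would train the CNN here and re-predict
        #  each sample's confidence before filtering)
        samples = filter_invalid(samples)
    return samples

crawled = [
    {"image": "hallasan.jpg", "confidence": 0.9},   # valid place image
    {"image": "random_ad.jpg", "confidence": 0.2},  # crawling noise
]
cleaned = learning_loop(crawled)
print(len(cleaned))  # the noisy sample has been filtered out
```

In the paper's full system the confidence values would come from the CNN's own predictions, so each pass through the loop removes crawled images the current model cannot reconcile with their labels.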
License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.