An Empirical Assessment of Artificial Intelligence-Based Deep Reinforcement Learning in Automatic Stock Trading

Authors

  • Joon Soo Yoo, Department of Public Administration, Hallym Polytechnic University, 48 Janghak-gil, Dong-myeon, Chuncheon-si, Gangwon-do, Republic of Korea

Keywords

Stock trading, deep reinforcement learning, artificial intelligence, efficiency

Abstract

Stock trading is the process of buying and selling stocks to generate financial returns. Effective trading hinges on making the right decisions at the right moments, or equivalently on developing a sound trading strategy. Many recent studies have applied machine learning (ML) techniques to predict stock movements or prices as a basis for trading. This research examines the potential of deep reinforcement learning, powered by Artificial Intelligence (AI), to improve the accuracy and efficiency of automated stock trading systems. It analyses the difficulties of automated stock trading and proposes a new Deep Reinforcement Learning (DRL) method to overcome them. The proposed method combines a deep neural network with a Reinforcement Learning (RL) algorithm to forecast stock prices and make trading decisions. Experiments on real stock data are run to assess how well the proposed method performs. The results show that it outperforms existing trading strategies and can deliver substantial gains in profitability.
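The paper's own implementation is not reproduced on this page. Purely as an illustration of the kind of pipeline the abstract describes, where a deep network estimates action values and is trained with an RL update on a trading reward, the following is a minimal DQN-style sketch in PyTorch. Every name, hyperparameter, the action space, and the synthetic price series are assumptions for demonstration, not the authors' actual method.

```python
# Minimal sketch of a DQN-style trading agent. The action space
# {0: short, 1: flat, 2: long}, the return-window state, and all
# hyperparameters are illustrative assumptions, not the paper's setup.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

WINDOW = 10   # number of past log returns fed to the network (assumption)
ACTIONS = 3   # short, flat, long
GAMMA = 0.99  # discount factor
EPS = 0.1     # exploration rate

q_net = nn.Sequential(nn.Linear(WINDOW, 64), nn.ReLU(), nn.Linear(64, ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)  # replay buffer of (s, a, r, s') tuples

def act(state: np.ndarray) -> int:
    """Epsilon-greedy action over the network's Q-values."""
    if random.random() < EPS:
        return random.randrange(ACTIONS)
    with torch.no_grad():
        q = q_net(torch.as_tensor(state, dtype=torch.float32))
    return int(q.argmax().item())

def train_step(batch_size: int = 64) -> None:
    """One gradient step on the standard one-step TD target."""
    if len(buffer) < batch_size:
        return
    s, a, r, s2 = zip(*random.sample(buffer, batch_size))
    s = torch.as_tensor(np.array(s), dtype=torch.float32)
    a = torch.as_tensor(a, dtype=torch.int64)
    r = torch.as_tensor(r, dtype=torch.float32)
    s2 = torch.as_tensor(np.array(s2), dtype=torch.float32)
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * q_net(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Toy training loop on a synthetic random-walk price series (assumption);
# real experiments would replace this with historical stock data.
prices = np.exp(np.cumsum(np.random.normal(0.0, 0.01, size=1000)))
returns = np.diff(np.log(prices))
for t in range(WINDOW, len(returns) - 1):
    state = returns[t - WINDOW:t]
    action = act(state)
    position = action - 1             # -1 short, 0 flat, +1 long
    reward = position * returns[t]    # one-step P&L as the reward signal
    next_state = returns[t - WINDOW + 1:t + 1]
    buffer.append((state, action, reward, next_state))
    train_step()
```

The reward here is simply the one-step profit and loss of the chosen position; the paper's actual reward shaping, state features, and network architecture may well differ.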

Published

10.11.2023

How to Cite

Yoo, J. S. (2023). An Empirical Assessment of Artificial Intelligence-Based Deep Reinforcement Learning in Automatic Stock Trading. International Journal of Intelligent Systems and Applications in Engineering, 12(4s), 470–476. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/3808

Section

Research Article