An Empirical Assessment of Artificial Intelligence-Based Deep Reinforcement Learning in Automatic Stock Trading
Keywords: Stock trading, deep reinforcement learning, artificial intelligence, efficiency
Stock trading is the process of buying and selling shares to generate financial returns; its effectiveness hinges on making the right trading decisions at the right time, or on developing a sound trading strategy. Many recent studies have applied machine learning (ML) techniques to predict stock movements or prices in support of trading. This research examines the potential of deep reinforcement learning powered by Artificial Intelligence (AI) to improve the accuracy and efficiency of automated stock trading systems. It analyses the challenges of automated stock trading and proposes a new Deep Reinforcement Learning (DRL) method to address them. The proposed method combines a deep neural network with a Reinforcement Learning (RL) algorithm to forecast stock prices and make trading decisions. Experiments on real stock data are conducted to evaluate the method's performance. The results show that it outperforms existing trading strategies and can deliver substantial gains in profitability.
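To make the RL-for-trading setup described above concrete, the sketch below implements a heavily simplified version of the idea: a tabular Q-learning agent (swapped in for the paper's deep neural network) that learns when to buy, hold, or sell on a synthetic price series. All names, the state encoding, the reward shape, and the hyperparameters are illustrative assumptions, not the paper's actual method.

```python
import random

# Available trading actions for the agent (illustrative assumption).
ACTIONS = ["hold", "buy", "sell"]


def train_agent(prices, episodes=200, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning over a price series.

    State  = (direction of the last price change, current position).
    Action = hold / buy / sell.
    Reward = mark-to-market profit of the held position on the next step.
    """
    q = {}  # (state, action) -> estimated value
    rng = random.Random(0)  # fixed seed for reproducibility
    for _ in range(episodes):
        position = 0  # shares held: 0 (flat) or 1 (long)
        for t in range(1, len(prices) - 1):
            state = (1 if prices[t] > prices[t - 1] else -1, position)
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            # Apply the action to the position.
            if action == "buy" and position == 0:
                position = 1
            elif action == "sell" and position == 1:
                position = 0
            # Reward: profit (or loss) from holding into the next step.
            reward = position * (prices[t + 1] - prices[t])
            next_state = (1 if prices[t + 1] > prices[t] else -1, position)
            best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
            key = (state, action)
            # Standard Q-learning update rule.
            q[key] = q.get(key, 0.0) + alpha * (
                reward + gamma * best_next - q.get(key, 0.0)
            )
    return q


# Usage on a toy oscillating price series (synthetic data, not real stocks).
prices = [100 + i % 5 for i in range(60)]
q_table = train_agent(prices)
```

A DRL method like the one the paper proposes would replace the Q-table with a deep neural network that maps richer market features to action values, but the agent-environment loop and the update rule follow the same pattern.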