Portfolio Optimization in Dynamic Markets: Reinforcement Learning for Investment

Authors

  • Paramita Sarkar, Assistant Professor, JIS University, Kolkata, West Bengal
  • Pruthiranjan Dwibedi, Research Scholar, School of Social, Financial and Human Sciences, KIIT Deemed to be University, Bhubaneswar
  • Shailesh Shivaji Deore, Associate Professor, Department of Computer Engineering, SSVPS B S Deore College of Engineering, Dhule, Maharashtra
  • Tukaram Gawali, Assistant Professor, Government College of Engineering, Jalgaon, Maharashtra
  • Mayuri Diwakar Kulkarni, Assistant Professor, Department of Computer Engineering, SVKM's Institute of Technology, Dhule, Maharashtra
  • Animesh Saha, Research Scholar, Department of Commerce, Assam University, Silchar-788011, Assam, India

Keywords:

Reinforcement Learning, Optimization, Machine Learning, Risk Management, Dynamic Market

Abstract

In today's volatile, fast-moving financial markets, portfolio allocation is a formidable problem. This study presents a novel approach to portfolio optimization in volatile markets that draws on techniques from Reinforcement Learning (RL). Because traditional investment strategies adapt slowly to changing market conditions, they often deliver lower returns and expose investors to greater risk. The RL-based method, in contrast, offers a dynamic and adaptive answer to this long-standing problem. The proposed model uses RL to learn and refine its strategy over time, maximizing returns while effectively limiting risk. Because RL is adaptive, the portfolio can respond immediately to market fluctuations, seizing opportunities and avoiding setbacks. We conduct extensive tests on historical market data to evaluate the RL-based portfolio optimization approach and to compare it with conventional investment strategies. Our results show that RL can produce superior risk-adjusted returns even in volatile markets. We also shed light on the practical implementation of RL in portfolio management by identifying the key factors that influence its effectiveness. This research is a first step toward rethinking investment strategies for fluid markets: by exploiting RL's considerable potential, it gives investors a robust framework for portfolio optimization that can survive, and even thrive, in volatile environments, improving both investment outcomes and risk management.
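The abstract describes an agent that learns an allocation policy by trial and error, trading off return against risk. The paper's actual model is not reproduced here; as a purely illustrative sketch, the loop below runs tabular Q-learning on synthetic returns, where the action is a discrete weight on a volatile asset and the reward is portfolio return minus a risk penalty. All asset parameters, the crude one-bit "market regime" state, and the hyperparameters are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily returns for two assets (illustrative parameters only).
T = 2000
returns = np.stack([
    rng.normal(0.0005, 0.02, T),   # "stock": higher mean, higher variance
    rng.normal(0.0001, 0.002, T),  # "bond": lower mean, lower variance
], axis=1)

# Discrete actions: fraction of wealth allocated to the volatile asset.
weights = np.array([0.0, 0.25, 0.5, 0.75, 1.0])

def state(t):
    # Deliberately crude regime signal (was yesterday's stock return up?)
    # just to keep the example tabular.
    return int(returns[t - 1, 0] > 0)

Q = np.zeros((2, len(weights)))          # action-values per regime state
alpha, gamma, eps, risk_aversion = 0.1, 0.9, 0.1, 5.0

for t in range(1, T - 1):
    s = state(t)
    # Epsilon-greedy exploration over allocation weights.
    a = rng.integers(len(weights)) if rng.random() < eps else int(Q[s].argmax())
    w = weights[a]
    port_ret = w * returns[t, 0] + (1 - w) * returns[t, 1]
    # Mean-variance-style reward: return minus a penalty on risk exposure.
    reward = port_ret - risk_aversion * (w * 0.02) ** 2
    # One-step Q-learning update.
    Q[s, a] += alpha * (reward + gamma * Q[state(t + 1)].max() - Q[s, a])

print(Q.shape)  # one row of action-values per market state
```

A deployed system would replace the tabular state with a feature-based deep policy (as in the DDPG and DRL references below) and the synthetic returns with historical market data.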

References

Z. Wang, S. Jin and W. Li, "Research on Portfolio Optimization Based on Deep Reinforcement Learning," 2022 4th International Conference on Machine Learning, Big Data and Business Intelligence (MLBDBI), Shanghai, China, 2022, pp. 391-395, doi: 10.1109/MLBDBI58171.2022.00081.

J. Moody and M. Saffell, "Learning to trade via direct reinforcement," in IEEE Transactions on Neural Networks, vol. 12, no. 4, pp. 875-889, July 2001, doi: 10.1109/72.935097.

L. Wei and Z. Weiwei, "Research on Portfolio Optimization Models Using Deep Deterministic Policy Gradient," 2020 International Conference on Robots & Intelligent System (ICRIS), Sanya, China, 2020, pp. 698-701, doi: 10.1109/ICRIS52159.2020.00174.

J. Henrydoss, S. Cruz, C. Li, M. Günther and T. E. Boult, "Enhancing Open-Set Recognition using Clustering-based Extreme Value Machine (C-EVM)," 2020 IEEE International Conference on Big Data (Big Data), Atlanta, GA, USA, 2020, pp. 441-448, doi: 10.1109/BigData50022.2020.9378012.

S. Ajani and M. Wanjari, "An Efficient Approach for Clustering Uncertain Data Mining Based on Hash Indexing and Voronoi Clustering," 2013 5th International Conference on Computational Intelligence and Communication Networks, 2013, pp. 486-490, doi: 10.1109/CICN.2013.106.

V. Khetani, Y. Gandhi, S. Bhattacharya, S. N. Ajani and S. Limkar, "Cross-Domain Analysis of ML and DL: Evaluating their Impact in Diverse Domains," International Journal of Intelligent Systems and Applications in Engineering, vol. 11, no. 7s, pp. 253-262, 2023.

E. Benhamou, D. Saltiel, J.-J. Ohana and J. Atif, "Detecting and adapting to crisis pattern with context based Deep Reinforcement Learning," 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 2021, pp. 10050-10057, doi: 10.1109/ICPR48806.2021.9412958.

L. Li, "Financial Trading with Feature Preprocessing and Recurrent Reinforcement Learning," 2021 16th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), Chengdu, China, 2021, pp. 162-169, doi: 10.1109/ISKE54062.2021.9755374.

Z. Shahbazi and Y.-C. Byun, "Improving the Cryptocurrency Price Prediction Performance Based on Reinforcement Learning," in IEEE Access, vol. 9, pp. 162651-162659, 2021, doi: 10.1109/ACCESS.2021.3133937.

N. Pai and V. Ilango, "A Comparative Study on Machine Learning Techniques in Assessment of Financial Portfolios," 2020 5th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 2020, pp. 876-882, doi: 10.1109/ICCES48766.2020.9137878.

T. Kabbani and E. Duman, "Deep Reinforcement Learning Approach for Trading Automation in the Stock Market," in IEEE Access, vol. 10, pp. 93564-93574, 2022, doi: 10.1109/ACCESS.2022.3203697.

I. V. Brandão, J. P. C. L. da Costa, B. J. G. Praciano, R. T. de Sousa and F. L. L. de Mendonça, "Decision support framework for the stock market using deep reinforcement learning," 2020 Workshop on Communication Networks and Power Systems (WCNPS), Brasilia, Brazil, 2020, pp. 1-6, doi: 10.1109/WCNPS50723.2020.9263712.

B. Itri, Y. Mohamed, Q. Mohammed, B. Omar and T. Mohamed, "Deep reinforcement learning strategy in automated trading systems," 2023 3rd International Conference on Innovative Research in Applied Science, Engineering and Technology (IRASET), Mohammedia, Morocco, 2023, pp. 1-8, doi: 10.1109/IRASET57153.2023.10152925.

C. Qian, W. Yu, X. Liu, D. Griffith and N. Golmie, "Towards Online Continuous Reinforcement Learning on Industrial Internet of Things," 2021 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/IOP/SCI), Atlanta, GA, USA, 2021, pp. 280-287, doi: 10.1109/SWC50871.2021.00046.

S. Goluža, T. Bauman, T. Kovačević and Z. Kostanjčar, "Imitation Learning for Financial Applications," 2023 46th MIPRO ICT and Electronics Convention (MIPRO), Opatija, Croatia, 2023, pp. 1130-1135, doi: 10.23919/MIPRO57284.2023.10159778.

T. Bai, Q. Lang, S. Song, Y. Fang and X. Liu, "Feature Fusion Deep Reinforcement Learning Approach for Stock Trading," 2022 41st Chinese Control Conference (CCC), Hefei, China, 2022, pp. 7240-7245, doi: 10.23919/CCC55666.2022.9901810.

W. Si, J. Li, P. Ding and R. Rao, "A Multi-objective Deep Reinforcement Learning Approach for Stock Index Future’s Intraday Trading," 2017 10th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, 2017, pp. 431-436, doi: 10.1109/ISCID.2017.210.

B. Belyakov and D. Sizykh, "Deep Reinforcement Learning Task for Portfolio Construction," 2021 International Conference on Data Mining Workshops (ICDMW), Auckland, New Zealand, 2021, pp. 1077-1082, doi: 10.1109/ICDMW53433.2021.00139.

X. Xie, "Quantitative Measurement Method of Tourism Contribution to Regional Economic Development based on Reinforcement Learning: from the Perspective of SVM," 2022 3rd International Conference on Electronics and Sustainable Communication Systems (ICESC), Coimbatore, India, 2022, pp. 1333-1336, doi: 10.1109/ICESC54411.2022.9885481.

M. Bende, M. Khandelwal, D. Borgaonkar and P. Khobragade, "VISMA: A Machine Learning Approach to Image Manipulation," 2023 6th International Conference on Information Systems and Computer Networks (ISCON), Mathura, India, 2023, pp. 1-5, doi: 10.1109/ISCON57294.2023.10112168.

R. Liu et al., "Computer Intelligent Investment Strategy Based on Deep Reinforcement Learning and Multi-Layer LSTM Network," 2022 IEEE 2nd International Conference on Data Science and Computer Application (ICDSCA), Dalian, China, 2022, pp. 1006-1015, doi: 10.1109/ICDSCA56264.2022.9988677.

C.-T. Chen, A.-P. Chen and S.-H. Huang, "Cloning Strategies from Trading Records using Agent-based Reinforcement Learning Algorithm," 2018 IEEE International Conference on Agents (ICA), Singapore, 2018, pp. 34-37, doi: 10.1109/AGENTS.2018.8460078.

Y. Zhao, G. Chetty and D. Tran, "Deep Learning for Real Estate Trading," 2022 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE), Gold Coast, Australia, 2022, pp. 1-7, doi: 10.1109/CSDE56538.2022.10089222.

H. Wang and S. Yu, "Robo-Advising: Enhancing Investment with Inverse Optimization and Deep Reinforcement Learning," 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA), Pasadena, CA, USA, 2021, pp. 365-372, doi: 10.1109/ICMLA52953.2021.00063.

A. N. Sihananto, A. P. Sari, M. E. Prasetyo, M. Y. Fitroni, W. N. Gultom and H. E. Wahanani, "Reinforcement Learning for Automatic Cryptocurrency Trading," 2022 IEEE 8th Information Technology International Seminar (ITIS), Surabaya, Indonesia, 2022, pp. 345-349, doi: 10.1109/ITIS57155.2022.10010206.

B. A. Usha, T. N. Manjunath and T. Mudunuri, "Commodity and Forex trade automation using Deep Reinforcement Learning," 2019 1st International Conference on Advanced Technologies in Intelligent Control, Environment, Computing & Communication Engineering (ICATIECE), Bangalore, India, 2019, pp. 27-31, doi: 10.1109/ICATIECE45860.2019.9063807.

A. Cigliano and F. Zampognaro, "A Machine Learning approach for routing in satellite Mega-Constellations," 2020 International Symposium on Advanced Electrical and Communication Technologies (ISAECT), Marrakech, Morocco, 2020, pp. 1-6, doi: 10.1109/ISAECT50560.2020.9523672.

Published

29.01.2024

How to Cite

Sarkar, P., Dwibedi, P., Deore, S. S., Gawali, T., Kulkarni, M. D., & Saha, A. (2024). Portfolio Optimization in Dynamic Markets: Reinforcement Learning for Investment. International Journal of Intelligent Systems and Applications in Engineering, 12(13s), 386–395. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/4605

Issue

Vol. 12 No. 13s (2024)

Section

Research Article
