Deep Reinforcement Learning and Generative Adversarial Networks for Portfolio Optimization: An Innovative Approach to Enhance Investment Strategies
Keywords:
Portfolio optimization, GAN, DRL, Actor-Critic Algorithm
Abstract
This research project presents an innovative approach to portfolio optimization that integrates Deep Reinforcement Learning (DRL) and Generative Adversarial Networks (GANs). The objective is to develop an intelligent portfolio management system that learns optimal investment strategies through DRL and generates synthetic portfolios aligned with predefined objectives using GANs. The synergy between these two machine learning paradigms addresses the challenges posed by traditional portfolio optimization methods, offering a dynamic and adaptive solution to evolving market conditions. The DRL component employs actor-critic algorithms that enable an intelligent agent to make real-time investment decisions based on historical financial data; this autonomous learning process allows the portfolio to adapt its strategy, balancing risk and return dynamically. The GAN component, by generating synthetic portfolios, enhances diversification possibilities and prepares the system for a wide range of market scenarios. Together, DRL and GANs open new avenues for strategic decision-making, risk mitigation, and exploration of the portfolio space.
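The abstract does not include an implementation, but the actor-critic mechanism named in the keywords can be illustrated with a minimal sketch. Everything below (network sizes, the Dirichlet action head, the synthetic stand-in for historical returns) is an illustrative assumption, not the authors' actual design: at each step the actor proposes portfolio weights, the critic estimates the state value, and both are updated from the realized one-period return.

```python
import torch
import torch.nn as nn
from torch.distributions import Dirichlet

# Minimal actor-critic sketch for portfolio weight selection.
# All sizes and the random "market" data are placeholder assumptions.
N_ASSETS, WINDOW = 5, 20

class ActorCritic(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Flatten(), nn.Linear(N_ASSETS * WINDOW, 64), nn.ReLU())
        self.alpha_head = nn.Linear(64, N_ASSETS)  # Dirichlet concentrations
        self.value_head = nn.Linear(64, 1)         # state-value estimate

    def forward(self, state):
        h = self.body(state)
        alpha = nn.functional.softplus(self.alpha_head(h)) + 1e-3
        return Dirichlet(alpha), self.value_head(h)

model = ActorCritic()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    # Stand-in for a window of historical asset returns.
    state = torch.randn(1, N_ASSETS, WINDOW) * 0.01
    dist, value = model(state)
    weights = dist.sample()                      # long-only weights, sum to 1
    next_returns = torch.randn(N_ASSETS) * 0.01  # placeholder market outcome
    reward = (weights * next_returns).sum()      # one-period portfolio return

    advantage = reward - value.squeeze()
    actor_loss = -dist.log_prob(weights).squeeze() * advantage.detach()
    critic_loss = advantage.pow(2)
    opt.zero_grad()
    (actor_loss + critic_loss).backward()
    opt.step()
```

The Dirichlet head is one convenient way to sample fully invested, long-only allocations; the paper's actual action parameterization and reward (e.g., risk-adjusted return) may differ.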
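The GAN side can be sketched the same way. In this hypothetical, minimal generator/discriminator pair, Gaussian noise stands in for the historical return data the system would actually train on, and every layer size is an assumption:

```python
import torch
import torch.nn as nn

# Minimal GAN sketch that learns to generate synthetic multi-asset
# return vectors; architectures and data are illustrative only.
N_ASSETS, LATENT, BATCH = 5, 16, 64

G = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, N_ASSETS))
D = nn.Sequential(nn.Linear(N_ASSETS, 32), nn.LeakyReLU(0.2),
                  nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(300):
    real = torch.randn(BATCH, N_ASSETS) * 0.01   # stand-in for historical returns
    fake = G(torch.randn(BATCH, LATENT))

    # Discriminator: separate real return vectors from generated ones.
    d_loss = (bce(D(real), torch.ones(BATCH, 1)) +
              bce(D(fake.detach()), torch.zeros(BATCH, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: produce scenarios the discriminator accepts as real.
    g_loss = bce(D(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Sampled synthetic market scenarios for downstream use.
scenarios = G(torch.randn(10, LATENT)).detach()
```

Scenarios sampled from the trained generator could then be fed to the DRL agent as additional market states, which is the diversification mechanism the abstract describes.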