TY - GEN
T1 - Turbulence-driven Autonomous Stock Trading using Deep Reinforcement Learning
AU - Jaggi, Ramneet
AU - Abbas, Muhammad Naveed
AU - Manzoor, Jawad
AU - Dwivedi, Rahul
AU - Asghar, Mamoona Naveed
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - This paper explores Deep Reinforcement Learning (DRL) algorithms for autonomous stock trading, aiming to replace stockbrokers with more efficient and profitable strategies. The study focuses on stock trading in finance, using DRL to analyze trends and exploit market fluctuations. It delves into the application of DRL techniques in stock trading using the Proximal Policy Optimization (PPO) algorithm, and presents experimental results from training and testing on Dow 30 and S&P 500 datasets that reveal the effectiveness of incorporating turbulence indicators such as the McClellan Oscillator (MCO) and the Hindenburg Omen. The incorporation of these modified indicators, MCO and Hindenburg Omen, significantly influenced the trading system's performance. In comparing trading strategies, the turbulence-incorporated Proximal Policy Optimization (PPO-T) agent, initially funded with $1M, exhibited consistent improvement and greater resilience, with only slight portfolio value dips compared to its non-turbulence counterpart (PPO). Notably, PPO-T outperformed PPO during market turbulence, with portfolio values rising to 1.8 and 2.0 for Dow 30 and S&P 500, respectively, surpassing the values of 1.5 and 1.9 attained by PPO. This underscores PPO-T's capacity for higher final portfolio values, emphasizing the efficacy of turbulence indicators in fortifying trading systems, especially in turbulent market conditions. The agents' performance is evaluated by tracking portfolio value changes over the specified trading days.
AB - This paper explores Deep Reinforcement Learning (DRL) algorithms for autonomous stock trading, aiming to replace stockbrokers with more efficient and profitable strategies. The study focuses on stock trading in finance, using DRL to analyze trends and exploit market fluctuations. It delves into the application of DRL techniques in stock trading using the Proximal Policy Optimization (PPO) algorithm, and presents experimental results from training and testing on Dow 30 and S&P 500 datasets that reveal the effectiveness of incorporating turbulence indicators such as the McClellan Oscillator (MCO) and the Hindenburg Omen. The incorporation of these modified indicators, MCO and Hindenburg Omen, significantly influenced the trading system's performance. In comparing trading strategies, the turbulence-incorporated Proximal Policy Optimization (PPO-T) agent, initially funded with $1M, exhibited consistent improvement and greater resilience, with only slight portfolio value dips compared to its non-turbulence counterpart (PPO). Notably, PPO-T outperformed PPO during market turbulence, with portfolio values rising to 1.8 and 2.0 for Dow 30 and S&P 500, respectively, surpassing the values of 1.5 and 1.9 attained by PPO. This underscores PPO-T's capacity for higher final portfolio values, emphasizing the efficacy of turbulence indicators in fortifying trading systems, especially in turbulent market conditions. The agents' performance is evaluated by tracking portfolio value changes over the specified trading days.
KW - deep reinforcement learning
KW - market indicators
KW - PPO
KW - stock trading
KW - turbulence
UR - http://www.scopus.com/inward/record.url?scp=85199458510&partnerID=8YFLogxK
U2 - 10.1109/ICMI60790.2024.10586200
DO - 10.1109/ICMI60790.2024.10586200
M3 - Conference contribution
AN - SCOPUS:85199458510
T3 - 2024 IEEE 3rd International Conference on Computing and Machine Intelligence, ICMI 2024 - Proceedings
BT - 2024 IEEE 3rd International Conference on Computing and Machine Intelligence, ICMI 2024 - Proceedings
A2 - Abdelgawad, Ahmed
A2 - Jamil, Akhtar
A2 - Hameed, Alaa Ali
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 3rd IEEE International Conference on Computing and Machine Intelligence, ICMI 2024
Y2 - 13 April 2024 through 14 April 2024
ER -