Learn Continuously, Act Discretely: Hybrid Action-Space Reinforcement Learning For Optimal Execution
Feiyang Pan, Tongzhe Zhang, Ling Luo, Jia He, Shuoling Liu
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3912-3918.
https://doi.org/10.24963/ijcai.2022/543
Optimal execution is a sequential decision-making problem for cost saving in algorithmic trading. Studies have found that reinforcement learning (RL) can help decide order-splitting sizes. However, a problem remains unsolved: how to place limit orders at appropriate limit prices?
The key challenge lies in the "continuous-discrete duality" of the action space. On the one hand, a continuous action space expressed as percentage changes in price is preferred for generalization across stocks. On the other hand, the trader must ultimately choose limit prices discretely because of the tick size, which requires specialization to each stock's characteristics (e.g., its liquidity and price range). So we need continuous control for generalization and discrete control for specialization. To this end, we propose a hybrid RL method that combines the advantages of both. We first use a continuous-control agent to scope an action subset, then deploy a fine-grained agent to choose a specific limit price. Extensive experiments show that our method achieves higher sample efficiency and better training stability than existing RL algorithms and significantly outperforms previous learning-based methods for order execution.
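To make the two-stage action selection concrete, below is a minimal PyTorch sketch of one possible reading of the abstract: a continuous head proposes a relative price offset that anchors a small window of tick levels, and a discrete head picks one tick within that window. All names (HybridPolicy, the window half-width k, max_offset_pct) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class HybridPolicy(nn.Module):
    """Hypothetical hybrid action-space policy: continuous scoping + discrete pick."""

    def __init__(self, obs_dim, hidden=64, k=2):
        super().__init__()
        self.k = k  # assumed half-width of the discrete tick window
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # Continuous head: relative price offset in [-1, 1], later scaled to a percentage
        self.cont_head = nn.Sequential(nn.Linear(hidden, 1), nn.Tanh())
        # Discrete head: logits over the 2k+1 tick levels around the continuous anchor
        self.disc_head = nn.Linear(hidden, 2 * k + 1)

    def forward(self, obs, mid_price, tick_size, max_offset_pct=0.01):
        h = self.encoder(obs)
        # Step 1: continuous control scopes a price region (generalizes across stocks)
        rel_offset = self.cont_head(h).squeeze(-1) * max_offset_pct
        anchor = mid_price * (1.0 + rel_offset)
        # Snap the anchor onto this stock's tick grid (specialization enters here)
        anchor_tick = torch.round(anchor / tick_size)
        # Step 2: discrete control picks one of the 2k+1 ticks around the anchor
        logits = self.disc_head(h)
        dist = torch.distributions.Categorical(logits=logits)
        idx = dist.sample()  # index in {0, ..., 2k}
        limit_price = (anchor_tick + idx - self.k) * tick_size
        return limit_price, dist.log_prob(idx)

# Usage: one limit-price decision for a stock quoted near 10.00 with a 0.01 tick size.
policy = HybridPolicy(obs_dim=8)
obs = torch.randn(1, 8)
price, logp = policy(obs, mid_price=torch.tensor([10.0]), tick_size=0.01)
print(price)  # a concrete limit price on the tick grid

The design point this sketch illustrates is the paper's stated division of labor: the continuous output transfers across stocks with different price levels, while the final discrete choice respects each stock's tick grid.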
Keywords:
Multidisciplinary Topics and Applications: Finance
Machine Learning: Deep Reinforcement Learning