Further optimizing market making with deep reinforcement learning: an unconstrained approach
School of Business | Master's thesis
Unless otherwise stated, all rights belong to the author. You may download, display and print this publication for your own personal use. Commercial use is prohibited.
Date
2023
Degree programme
Finance
Language
en
Pages
39 + 8
Abstract
In recent years, market making has been increasingly studied as a reinforcement learning problem. The previous literature, however, constrains the market maker agent's decision-making by not allowing it to freely set price and size. We expand on this literature by removing these constraints and giving our agent more freedom in decision-making, allowing it to execute more complex trading strategies. We then compare the performance of our agent against the rule-based model proposed by Avellaneda & Stoikov (2008) on real order book data. Reinforcement learning is a machine learning method in which an agent interacts with an environment and receives a positive or negative reward based on the result of its actions. The reward guides the agent to reinforce desired behaviour and discourages actions that lead to unwanted results. In the context of market making, the deep reinforcement learning process optimizes the balance between profit generation and the inventory risk that a market maker has to take. We test a variety of reward functions that guide the decision-making of our deep reinforcement learning agent, to identify the most effective and robust solution. We find that our agent trained using reinforcement learning significantly outperforms the Avellaneda and Stoikov model, achieving higher daily profits with lower inventory risk. The agent trades significantly more, executing almost three times the volume, but even with trading fees included the agent would break even at a 0.048% fee, whereas the breakeven fee for the benchmark model is just 0.015%. Our findings show that a machine learning approach can significantly improve on the performance of heuristic models, such as the one developed by Avellaneda and Stoikov, in complex optimization problems like market making. Our most important contribution to the existing academic literature is showing that the agent capitalizes on being allowed to adjust its trade size and indeed trades with different sizes depending on the current state. Finally, we describe our machine learning pipeline and environment development in detail so that practitioners can further apply it in their work.
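To make the setup concrete, the sketch below shows how the Avellaneda & Stoikov (2008) benchmark derives its bid and ask quotes from the model's closed-form reservation price and optimal spread, alongside one common inventory-penalized reward of the kind a market-making agent might be trained on. The function names, parameter values, and the quadratic penalty form are illustrative assumptions, not the thesis's actual implementation; the thesis compares several reward variants.

import math

def avellaneda_stoikov_quotes(mid, inventory, gamma, sigma, k, t_remaining):
    """Quotes from the Avellaneda & Stoikov (2008) benchmark.

    mid         : current mid-price
    inventory   : signed inventory q of the market maker
    gamma       : risk-aversion parameter
    sigma       : volatility of the mid-price
    k           : order-flow intensity parameter
    t_remaining : normalized time left in the trading day, T - t
    """
    # Reservation price: the quote midpoint is shifted against the inventory.
    reservation = mid - inventory * gamma * sigma**2 * t_remaining
    # Optimal total spread posted around the reservation price.
    spread = gamma * sigma**2 * t_remaining + (2.0 / gamma) * math.log(1.0 + gamma / k)
    return reservation - spread / 2.0, reservation + spread / 2.0

def inventory_penalized_reward(pnl_change, inventory, eta=0.01):
    """One common market-making reward: the step's PnL change minus a
    quadratic inventory penalty (illustrative; eta is a hypothetical weight)."""
    return pnl_change - eta * inventory**2

bid, ask = avellaneda_stoikov_quotes(
    mid=100.0, inventory=5.0, gamma=0.1, sigma=2.0, k=1.5, t_remaining=0.5
)
print(f"bid={bid:.3f} ask={ask:.3f}")

A long positive inventory lowers the reservation price, so both quotes skew downward to attract sells and shed risk; the reward term plays the analogous role for the learned agent, discouraging large inventories while rewarding profit.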
Thesis advisor
Kaustia, Markku
Keywords
market making, deep reinforcement learning, optimization, unconstrained orders