Further optimizing market making with deep reinforcement learning: an unconstrained approach
dc.contributor | Aalto University | en |
dc.contributor | Aalto-yliopisto | fi |
dc.contributor.advisor | Kaustia, Markku | |
dc.contributor.author | Ahlroos, Juuso | |
dc.contributor.author | Vehniäinen, Ville | |
dc.contributor.department | Rahoituksen laitos | fi |
dc.contributor.school | Kauppakorkeakoulu | fi |
dc.contributor.school | School of Business | en |
dc.date.accessioned | 2023-08-20T16:05:16Z | |
dc.date.available | 2023-08-20T16:05:16Z | |
dc.date.issued | 2023 | |
dc.description.abstract | In recent years, market making has increasingly been studied as a reinforcement learning problem. The previous literature, however, constrains the market-maker agent's decision-making by not allowing it to freely set price and size. We expand on this literature by removing these constraints, giving our agent more freedom in decision-making and allowing it to execute more complex trading strategies. We then compare the performance of our agent against the rule-based model proposed by Avellaneda & Stoikov (2008) on real order book data. Reinforcement learning is a machine learning method in which an agent interacts with an environment and receives a positive or negative reward based on the outcome of its actions. The reward reinforces desired behaviour and discourages actions that lead to unwanted results. In the context of market making, deep reinforcement learning optimizes the balance between profit generation and the inventory risk a market maker has to take. We test a variety of reward functions that guide the decision-making of our deep reinforcement learning agent in order to identify the most effective and robust solution. We find that our reinforcement learning agent significantly outperforms the Avellaneda and Stoikov model, achieving higher daily profits with lower inventory risk. The agent trades significantly more, executing almost three times the volume, but even if trading fees were included it would break even at a 0.048% fee, whereas the breakeven fee for the benchmark model is just 0.015%. Our findings show that a machine learning approach can significantly improve on heuristic models, such as the one developed by Avellaneda and Stoikov, in complex optimization problems like market making. Our most important contribution to the existing academic literature is showing that the agent capitalizes on being allowed to adjust its trade size and indeed trades with different sizes depending on the current state. Finally, we describe our machine learning pipeline and environment development in detail so that practitioners can apply them in their work. | en |
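The abstract above describes the setup in prose; the two fragments below are illustrative only and are not taken from the thesis. First, for reference, the benchmark of Avellaneda & Stoikov (2008) centres its quotes on a reservation price with an optimal total spread (in the original paper's notation; the thesis may use a variant or a different calibration):

\[
r(s,t) = s - q\,\gamma\,\sigma^{2}\,(T-t), \qquad
\delta^{a} + \delta^{b} = \gamma\,\sigma^{2}\,(T-t) + \frac{2}{\gamma}\ln\!\left(1 + \frac{\gamma}{k}\right),
\]

where \(s\) is the mid price, \(q\) the current inventory, \(\gamma\) the risk aversion, \(\sigma\) the volatility, \(T-t\) the remaining horizon, and \(k\) the order-arrival (liquidity) parameter.

Second, a minimal sketch of the interaction loop the abstract describes: an agent that freely chooses both quote offsets and sizes (the unconstrained action space) and receives a reward balancing trading profit against inventory risk. The environment dynamics, the fill model, the `risk_aversion` weight, and the placeholder policy are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

class ToyOrderBookEnv:
    """Toy stand-in for the order-book environment; dynamics are illustrative only."""
    def __init__(self, n_steps=500, risk_aversion=0.01, seed=0):
        self.n_steps, self.risk_aversion = n_steps, risk_aversion
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t, self.mid, self.inventory = 0, 100.0, 0.0
        return np.array([self.mid, self.inventory, 0.0])

    def step(self, action):
        # The agent sets both prices AND sizes: the "unconstrained" action space.
        bid_off, ask_off, bid_size, ask_size = action
        # Toy fill model: quotes closer to the mid price are filled more often.
        bid_fill = bid_size * (self.rng.random() < np.exp(-5 * bid_off))
        ask_fill = ask_size * (self.rng.random() < np.exp(-5 * ask_off))
        spread_pnl = bid_fill * bid_off + ask_fill * ask_off   # earned half-spreads
        self.inventory += bid_fill - ask_fill
        mid_move = self.rng.normal(0.0, 0.05)                  # random-walk mid price
        self.mid += mid_move
        # Reward = trading profit + mark-to-market of inventory - inventory-risk penalty.
        reward = (spread_pnl + self.inventory * mid_move
                  - self.risk_aversion * self.inventory ** 2)
        self.t += 1
        done = self.t >= self.n_steps
        return np.array([self.mid, self.inventory, self.t / self.n_steps]), reward, done

def random_policy(state, rng):
    # Placeholder for the trained deep RL policy: offsets in price units, sizes in units.
    return rng.uniform([0.01, 0.01, 0.0, 0.0], [0.5, 0.5, 2.0, 2.0])

env, rng = ToyOrderBookEnv(), np.random.default_rng(1)
state, done, total = env.reset(), False, 0.0
while not done:
    state, reward, done = env.step(random_policy(state, rng))
    total += reward
print(f"episode reward: {total:.2f}")
```

One common reading of the breakeven fees quoted in the abstract: a proportional fee rate f makes a strategy break even when profit − f × traded volume = 0, so f equals profit divided by volume; the agent's higher 0.048% threshold then means its extra profit more than offsets its roughly threefold volume.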
dc.format.extent | 39 + 8 | |
dc.format.mimetype | application/pdf | en |
dc.identifier.uri | https://aaltodoc.aalto.fi/handle/123456789/122529 | |
dc.identifier.urn | URN:NBN:fi:aalto-202308204875 | |
dc.language.iso | en | en |
dc.location | P1 I | fi |
dc.programme | Finance | en |
dc.relation.hasversion | Opinnäyte on tehty yhteistyössä Ville Vehniäisen kanssa. https://urn.fi/URN:NBN:fi:aalto-202308204876 | fi |
dc.relation.hasversion | This thesis is a co-operation with Ville Vehniäinen. https://urn.fi/URN:NBN:fi:aalto-202308204876 | en |
dc.subject.keyword | market making | en |
dc.subject.keyword | deep reinforcement learning | en |
dc.subject.keyword | optimization | en |
dc.subject.keyword | unconstrained orders | en |
dc.title | Further optimizing market making with deep reinforcement learning: an unconstrained approach | en |
dc.type | G2 Pro gradu, diplomityö | fi |
dc.type.ontasot | Master's thesis | en |
dc.type.ontasot | Maisterin opinnäyte | fi |
local.aalto.electroniconly | yes | |
local.aalto.openaccess | yes |
Files
Original bundle
- Name: master_Ahlroos_Juuso_2023.pdf
- Size: 2.41 MB
- Format: Adobe Portable Document Format