Further optimizing market making with deep reinforcement learning: an unconstrained approach

dc.contributor: Aalto University [en]
dc.contributor: Aalto-yliopisto [fi]
dc.contributor.advisor: Kaustia, Markku
dc.contributor.author: Ahlroos, Juuso
dc.contributor.author: Vehniäinen, Ville
dc.contributor.department: Department of Finance (Rahoituksen laitos) [fi]
dc.contributor.school: Kauppakorkeakoulu [fi]
dc.contributor.school: School of Business [en]
dc.date.accessioned: 2023-08-20T16:05:16Z
dc.date.available: 2023-08-20T16:05:16Z
dc.date.issued: 2023
dc.description.abstract: In recent years, market making has been increasingly studied as a reinforcement learning problem. The previous literature, however, constrains the market-maker agent's decision-making by not allowing it to freely set quote price and size. We expand on this literature by removing these constraints, giving our agent more freedom in decision-making and allowing it to execute more complex trading strategies. We then compare the performance of our agent against the rule-based model proposed by Avellaneda & Stoikov (2008) on real order book data. Reinforcement learning is a machine learning method in which an agent interacts with an environment and receives a positive or negative reward based on the result of its actions; the reward reinforces desired behaviour and discourages actions that lead to unwanted results. In the context of market making, deep reinforcement learning optimizes the balance between profit generation and the inventory risk a market maker has to take. We test a variety of reward functions that guide the decision-making of our deep reinforcement learning agent in order to identify the most effective and robust solution. We find that our agent trained using reinforcement learning significantly outperforms the Avellaneda and Stoikov model, achieving higher daily profits with lower inventory risk. The agent trades significantly more, doing almost three times the volume, yet even when trading fees are included it breaks even at a 0.048% fee, whereas the breakeven fee for the benchmark model is just 0.015%. Our findings show that a machine learning approach can significantly improve on the performance of heuristic models, such as the one developed by Avellaneda and Stoikov, in complex optimization problems like market making. Our most important contribution to the existing academic literature is showing that the agent capitalizes on being allowed to adjust its trade size and indeed trades with different sizes depending on the current state. Finally, we describe our machine learning pipeline and environment development in detail so that practitioners can further apply it in their work. [en]
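The benchmark referenced in the abstract, the Avellaneda & Stoikov (2008) model, reduces to a closed-form quoting rule: a reservation price shifted away from the mid price in proportion to inventory, and an optimal spread placed symmetrically around it. The sketch below illustrates that rule only; it is not code from the thesis, and the parameter values (gamma, sigma, k) are illustrative placeholders.

    import math

    def avellaneda_stoikov_quotes(mid_price, inventory, t, T,
                                  gamma=0.1, sigma=2.0, k=1.5):
        # Placeholder parameters: gamma = risk aversion, sigma = mid-price
        # volatility, k = order-flow intensity; not the thesis's calibration.
        # Reservation price: the mid price shifted against current inventory,
        # so a long position lowers both quotes and a short position raises them.
        reservation = mid_price - inventory * gamma * sigma ** 2 * (T - t)
        # Optimal total spread quoted symmetrically around the reservation price.
        spread = gamma * sigma ** 2 * (T - t) + (2.0 / gamma) * math.log(1.0 + gamma / k)
        return reservation - spread / 2.0, reservation + spread / 2.0

    # Example: a market maker long 3 units halfway through the trading horizon.
    bid, ask = avellaneda_stoikov_quotes(mid_price=100.0, inventory=3, t=0.5, T=1.0)
    print(f"bid={bid:.2f} ask={ask:.2f}")

Note that the rule sets only prices around a fixed size, which is the constraint the thesis removes by letting the reinforcement learning agent choose both quote price and quote size.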
dc.format.extent: 39 + 8
dc.format.mimetype: application/pdf [en]
dc.identifier.uri: https://aaltodoc.aalto.fi/handle/123456789/122529
dc.identifier.urn: URN:NBN:fi:aalto-202308204875
dc.language.iso: en [en]
dc.location: P1 I [fi]
dc.programme: Finance [en]
dc.relation.hasversion: This thesis was written in co-operation with Ville Vehniäinen. https://urn.fi/URN:NBN:fi:aalto-202308204876 [fi]
dc.relation.hasversion: This thesis was written in co-operation with Ville Vehniäinen. https://urn.fi/URN:NBN:fi:aalto-202308204876 [en]
dc.subject.keyword: market making [en]
dc.subject.keyword: deep reinforcement learning [en]
dc.subject.keyword: optimization [en]
dc.subject.keyword: unconstrained orders [en]
dc.title: Further optimizing market making with deep reinforcement learning: an unconstrained approach [en]
dc.type: G2 Pro gradu, diplomityö [fi]
dc.type.ontasot: Master's thesis [en]
dc.type.ontasot: Maisterin opinnäyte [fi]
local.aalto.electroniconly: yes
local.aalto.openaccess: yes
Files
Original bundle: master_Ahlroos_Juuso_2023.pdf (2.41 MB, Adobe Portable Document Format)