Advancements in reinforcement learning have allowed agents to play games of increasing complexity. One potential use case of this technology in the game industry is QA testing. This thesis investigates the possibility of training an autonomous agent inside a prevalent game engine, Unreal Engine 5.0, to explore a level and detect missing colliders.
This thesis studies the performance of a curiosity-driven AI agent. Curiosity allows agents to discover novel game states through intrinsic motivation, without explicitly defined goals. The agent is coupled with count-based exploration of the game's level and a visualization of its path, and the solution's coverage and speed are evaluated against a random policy and a human tester.
Results indicate that the agent's coverage is sufficient to explore the whole level, and that it finds collision bugs more effectively than both a random policy and a human tester.