Abstract:
Mathematical optimization methods have been applied to a vast variety of complex problems in the field of process systems engineering (e.g., the scheduling of chemical batch processes). However, the use of these methods in online scheduling is hindered by the stochastic nature of the processes and by prohibitively long solution times when optimizing over long time horizons. This raises the following questions: when to trigger a rescheduling, how much computing time to allocate, which optimization strategy to use, and how far ahead to schedule? We propose an approach in which a reinforcement learning agent is trained to make the first two decisions (i.e., rescheduling timing and computing time allocation). Using neuroevolution of augmenting topologies (NEAT) as the reinforcement learning algorithm, the approach yields, on average, better closed-loop solutions than conventional rescheduling methods on three out of four studied routing problems. We also reflect on extending the agent's decision-making to all four decisions.
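To make the agent's role concrete, the sketch below illustrates the two decisions described above: whether to trigger a rescheduling and how much computing time to grant the optimizer. It is a minimal illustration, not the paper's implementation; the state features, function names, and the hand-coded threshold rule are hypothetical stand-ins for the policy network that NEAT would evolve.

```python
# Minimal sketch of the agent's two decisions at each scheduling step.
# The threshold policy below is a hypothetical placeholder for the
# NEAT-evolved network described in the abstract.
from dataclasses import dataclass


@dataclass
class SchedulerState:
    time_since_last_schedule: float   # e.g., hours since the last reschedule
    deviation_from_plan: float        # normalized gap between plan and reality
    remaining_horizon: float          # fraction of the scheduling horizon left


def agent_decision(state: SchedulerState) -> tuple[bool, float]:
    """Return (trigger_rescheduling, cpu_seconds_budget).

    In the approach outlined in the abstract, this mapping would be produced
    by a NEAT-evolved network; the rule here only illustrates the interface.
    """
    trigger = (state.deviation_from_plan > 0.2
               or state.time_since_last_schedule > 8.0)
    # Allocate more computing time when the deviation is large and much of
    # the horizon remains (purely illustrative heuristic).
    budget = 10.0 + 120.0 * state.deviation_from_plan * state.remaining_horizon
    return trigger, budget


if __name__ == "__main__":
    state = SchedulerState(time_since_last_schedule=5.0,
                           deviation_from_plan=0.35,
                           remaining_horizon=0.6)
    print(agent_decision(state))  # e.g., (True, 35.2)
```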