Reinforcement Learning Methods for Setpoint Optimization and Control Method Design in Process Industry with Case Studies in Steel Strip Rolling and District Heating
School of Electrical Engineering | Doctoral thesis (article-based) | Defence date: 2024-08-23
Unless otherwise stated, all rights belong to the author. You may download, display and print this publication for your own personal use. Commercial use is prohibited.
Language
en
Pages
87 + app. 73
Series
Aalto University publication series DOCTORAL THESES, 148/2024
Abstract
The process industry requires precise control and monitoring to ensure operational efficiency, safety, and productivity. Traditional approaches, such as first-principles models, empirical models, and trial-and-error methods, often rely on simplification and linearization to cope with the intricate and dynamic nature of industrial processes. To improve product quality and energy efficiency, however, there is a growing demand for intelligent and adaptive methods that compute optimal solutions for these processes. One significant challenge is setpoint optimization, where equipment parameters must be computed precisely to meet quality specifications. In process control, product quality depends on feedback control, yet devising adaptive control methods that respond dynamically to evolving conditions remains difficult.

Because reinforcement learning (RL) can learn from interaction, RL techniques have been adopted to learn policies for both setpoint optimization and process control. For setpoint optimization in strip rolling and fuel cost reduction in district heating, RL methods were investigated to compute and optimize system setpoints: using environment models of the processes, the RL agents generate solutions that respect machine capacity while meeting customer demands. Furthermore, RL-based adaptive control methods were developed for the steel strip rolling process, enabling dynamic responses to evolving conditions. To make the RL-based control policies more accurate and practical for industrial processes, an offline RL method that learns control policies directly from data is proposed, addressing the biases introduced by approximated environment models.
Steel strip rolling and district heating were selected to evaluate the efficacy of the RL-based methods in addressing setpoint optimization and process control challenges. The results indicate that the proposed methods outperform traditional approaches, marking substantial advances in automation, optimization, and control methodologies within the process industry.
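The setpoint-optimization idea described in the abstract can be illustrated with a minimal sketch: an agent repeatedly tries candidate setpoints against a process model and learns which one best meets a quality target. This is a toy, not the thesis's actual method or process models — the candidate setpoints, the `process_reward` function, and the target value are all invented for the example, and a simple tabular epsilon-greedy update stands in for the deep RL algorithms used in the publications.

```python
import random

# Hypothetical toy "process": quality deviation is smallest at a target
# setpoint the agent does not know in advance.
SETPOINTS = [round(0.1 * i, 1) for i in range(11)]  # candidate setpoints 0.0 .. 1.0
TARGET = 0.7                                        # optimum, hidden from the agent

def process_reward(setpoint: float) -> float:
    """Reward = negative quality deviation plus small measurement noise."""
    return -abs(setpoint - TARGET) + random.gauss(0, 0.01)

def train(episodes: int = 5000, eps: float = 0.1, lr: float = 0.1) -> float:
    """Tabular epsilon-greedy learning of the best setpoint."""
    q = {s: 0.0 for s in SETPOINTS}      # value estimate per candidate setpoint
    for _ in range(episodes):
        if random.random() < eps:        # explore: try a random setpoint
            s = random.choice(SETPOINTS)
        else:                            # exploit: use the current best estimate
            s = max(q, key=q.get)
        r = process_reward(s)
        q[s] += lr * (r - q[s])          # incremental value update
        eps = max(0.01, eps * 0.999)     # decay exploration over time
    return max(q, key=q.get)

if __name__ == "__main__":
    random.seed(0)
    print(f"learned setpoint: {train()}")
```

In the thesis's actual case studies, the lookup table is replaced by neural-network policies and the toy reward by environment models of the rolling mill and district heating network, but the interaction loop — act, observe reward, improve the policy — is the same.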
Supervising professor
Vyatkin, Valeriy, Prof., Aalto University, Department of Electrical Engineering and Automation, Finland
Thesis advisors
Sierla, Seppo, Dr., Aalto University, Department of Electrical Engineering and Automation, Finland
Sun, Jie, Prof., Northeastern University, China
Parts
- [Publication 1]: Jifei Deng, Seppo Sierla, Jie Sun, and Valeriy Vyatkin, “Mass customization with reinforcement learning: Automatic reconfiguration of a production line,” Applied Soft Computing, vol. 145, p. 110547, Sep. 2023.
  Full text in Acris/Aaltodoc: https://urn.fi/URN:NBN:fi:aalto-202308014559
  DOI: 10.1016/j.asoc.2023.110547
- [Publication 2]: Jifei Deng, Miro Eklund, Seppo Sierla, Jouni Savolainen, Hannu Niemistö, Tommi Karhela, and Valeriy Vyatkin, “Application of reinforcement learning for energy consumption optimization of district heating system,” in 2023 IEEE 32nd International Symposium on Industrial Electronics (ISIE), Jun. 2023, pp. 1–6.
  Full text in Acris/Aaltodoc: https://urn.fi/URN:NBN:fi:aalto-202310046182
  DOI: 10.1109/ISIE51358.2023.10228102
- [Publication 3]: Jifei Deng, Miro Eklund, Seppo Sierla, Jouni Savolainen, Hannu Niemistö, Tommi Karhela, and Valeriy Vyatkin, “Deep reinforcement learning for fuel cost optimization in district heating,” Sustainable Cities and Society, vol. 99, p. 104955, Dec. 2023.
  Full text in Acris/Aaltodoc: https://urn.fi/URN:NBN:fi:aalto-202310046194
  DOI: 10.1016/j.scs.2023.104955
- [Publication 4]: Jifei Deng, Seppo Sierla, Jie Sun, and Valeriy Vyatkin, “Dynamic Modeling of Strip Rolling Process Using Probabilistic Neural Network,” in 2024 IEEE 18th International Conference on Advanced Motion Control (AMC), Mar. 2024.
- [Publication 5]: Jifei Deng, Seppo Sierla, Jie Sun, and Valeriy Vyatkin, “Reinforcement learning for industrial process control: A case study in flatness control in steel industry,” Computers in Industry, vol. 143, p. 103748, Dec. 2022.
  Full text in Acris/Aaltodoc: https://urn.fi/URN:NBN:fi:aalto-202208174894
  DOI: 10.1016/j.compind.2022.103748
- [Publication 6]: Jifei Deng, Seppo Sierla, Jie Sun, and Valeriy Vyatkin, “Offline reinforcement learning for industrial process control: A case study from steel industry,” Information Sciences, vol. 632, pp. 221–231, Jun. 2023.
  Full text in Acris/Aaltodoc: https://urn.fi/URN:NBN:fi:aalto-202303292648
  DOI: 10.1016/j.ins.2023.03.019