Interpretable Control for Industrial Systems by Genetic Programming
Daniel Hein, Siemens Corporate Technology
Abstract
In industry, proportional–integral–derivative (PID) control is still a very important and widely used strategy for realizing stable, efficient, and understandable control. For many applications, it is critical that control theory experts can interpret the setup of a specific controller by analyzing its components. However, the field of application of PID controllers is rather limited: they are linear and symmetric around the setpoint, do not incorporate planning, and cannot utilize direct knowledge of the process, for example.
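For reference, a discrete-time PID control law can be sketched in a few lines; the gains, setpoint, and time step below are purely illustrative assumptions, not values from any specific application:

```python
class PID:
    """Minimal discrete-time PID controller (illustrative gains only)."""

    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, measurement):
        # The error term is linear and symmetric around the setpoint,
        # which is exactly the limitation noted above.
        error = self.setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The contribution of each term is directly readable from its gain, which is what makes a PID setup easy for a control expert to interpret.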
Reinforcement learning (RL), on the other hand, is theoretically capable of solving any Markov decision process, which makes it applicable to a dramatically broader range of problems than classical PID control. Moreover, since RL is not limited to linear control, can incorporate the full observation of the system, and is able to solve planning tasks, it is capable of outperforming PID controllers even in classical industrial applications.
However, there is a lack of trust in standard black-box RL policies such as neural networks, since these solutions are mostly non-interpretable. Genetic programming reinforcement learning (GPRL) is a genetic programming approach for autonomously learning interpretable RL policies from previously recorded state transitions. GPRL has been evaluated on several real industrial applications, such as traffic light and wind turbine optimization. Domain experts are able to interpret and discuss the learned policies and select promising candidates for deployment on street crossings or wind turbines. Empirical evaluations show that, despite the compact and easily interpretable form of the policies, their performance is very often on a par with that of non-interpretable policy solutions.
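A heavily simplified sketch of the underlying idea is shown below: candidate policies are small expression trees over the state variables, and they are scored by rolling them out on a surrogate model of the system. Everything here (the toy one-dimensional model, the quadratic reward, the mutation-only evolution loop) is an illustrative assumption, not the actual GPRL implementation:

```python
import math
import random

# Toy surrogate model (hypothetical): a 1-D system whose state decays
# and is nudged by a clipped action. GPRL evaluates policies on a model
# learned from recorded transitions; this stands in for such a model.
def model_step(s, a):
    return 0.9 * s + 0.1 * max(-1.0, min(1.0, a))

def rollout_return(policy, starts, horizon=20):
    total = 0.0
    for s in starts:
        for _ in range(horizon):
            s = model_step(s, policy(s))
            total -= s * s  # reward: keep the state near zero
    return total / len(starts)

# Policies are tiny expression trees: ('var',), ('const', c), or (op, left, right).
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}

def random_tree(depth=2):
    if depth == 0 or random.random() < 0.3:
        return ('var',) if random.random() < 0.5 else ('const', random.uniform(-2, 2))
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, s):
    if tree[0] == 'var':
        return s
    if tree[0] == 'const':
        return tree[1]
    return OPS[tree[0]](evaluate(tree[1], s), evaluate(tree[2], s))

def mutate(tree):
    if random.random() < 0.3 or tree[0] in ('var', 'const'):
        return random_tree()
    return (tree[0], mutate(tree[1]), mutate(tree[2]))

def gprl_sketch(generations=30, pop_size=40, seed=0):
    random.seed(seed)
    starts = [-1.0, -0.5, 0.5, 1.0]  # stand-in for recorded states
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(
            pop, key=lambda t: -rollout_return(lambda s: evaluate(t, s), starts))
        elite = scored[:pop_size // 4]
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return scored[0]
```

Because the result is an explicit expression tree rather than a weight matrix, a domain expert can read the evolved policy directly, which is the property the abstract emphasizes.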
CV
Daniel Hein is a research scientist at Siemens Corporate Technology in Munich, Germany, working in the area of applied machine learning and reinforcement learning. He received his M.Sc. degree in Informatics from the Technical University of Munich, Germany, in 2014. In 2018, he submitted his Ph.D. thesis on the topic of “Interpretable Reinforcement Learning Policies by Evolutionary Computation” in Informatics at the Technical University of Munich, Germany. His main research interests include evolutionary algorithms, particle swarm optimization, genetic programming, interpretable reinforcement learning, and industrial applications of machine learning approaches.