Reinforcement Learning Approach to Autonomous PID Tuning
Published Mar 1, 2022 · Oguzhan Dogru, Kirubakaran Velswamy, Fadi Ibrahim
2022 American Control Conference (ACC)
73 Citations · 1 Influential Citation
Abstract
Many industrial processes rely on proportional-integral-derivative (PID) controllers because of their practicality and often satisfactory performance. The appropriate controller parameters depend strongly on operating conditions and process uncertainties, so process drifts and changes in operating conditions make frequent retuning necessary in real-time control problems. This study combines recent developments in computer science and control theory to address the tuning problem. It formulates PID tuning as a reinforcement learning task with constraints. The proposed scheme identifies an initial approximate step-response model and lets the agent learn the dynamics off-line from that model with minimal effort. After achieving satisfactory training performance on the model, the agent is fine-tuned on-line on the actual process to adapt to the real dynamics, thereby minimizing training time on the real process and avoiding unnecessary wear, which is beneficial for industrial applications. This sample-efficient method is applied to a pilot-scale multi-modal tank system, and its performance is demonstrated in setpoint-tracking and disturbance-regulation experiments.
This study proposes a reinforcement learning approach for autonomous PID tuning, minimizing training time and reducing wear in industrial processes.
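The off-line stage described above, tuning PID gains against an identified approximate step-response model before touching the real plant, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the first-order-plus-dead-time (FOPDT) parameters are assumed for the example, and a simple random-search update stands in for the constrained RL agent.

```python
import numpy as np

def simulate_pid(gains, K=2.0, tau=5.0, theta=1.0, dt=0.1, T=40.0, setpoint=1.0):
    """Run a PID controller on an assumed FOPDT step-response model
    (gain K, time constant tau, dead time theta) and return the
    integral squared error (ISE) over the horizon T."""
    kp, ki, kd = gains
    n_steps = int(T / dt)
    delay = int(theta / dt)          # dead time expressed in samples
    y, integ = 0.0, 0.0
    prev_err = setpoint - y
    u_hist = [0.0] * (delay + 1)     # buffer implementing the input delay
    ise = 0.0
    for _ in range(n_steps):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        u_hist.append(u)
        u_delayed = u_hist.pop(0)
        # First-order dynamics: tau * dy/dt = -y + K * u(t - theta)
        y += (dt / tau) * (-y + K * u_delayed)
        prev_err = err
        ise += err ** 2 * dt
    return ise

def tune_offline(n_iters=200, seed=0):
    """Crude stand-in for the RL agent: perturb the gains and keep
    improvements, training entirely on the identified model."""
    rng = np.random.default_rng(seed)
    best = np.array([0.5, 0.1, 0.0])        # assumed initial (kp, ki, kd)
    best_cost = simulate_pid(best)
    for _ in range(n_iters):
        cand = np.clip(best + rng.normal(0.0, 0.1, size=3), 0.0, 10.0)
        cost = simulate_pid(cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost
```

In the paper's scheme, the gains produced off-line serve only as a warm start; the agent is then fine-tuned on the actual process, so a model mismatch here costs far fewer real-plant interactions than tuning from scratch would.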