Daniel Felix Ritchie School of Engineering and Computer Science, Electrical and Computer Engineering
David Wenzhong Gao
Dynamic control, Fault ride through, Frequency regulation, Power system, Reinforcement learning, Renewable energy
This dissertation investigates the feasibility and effectiveness of Reinforcement Learning (RL) techniques for power system dynamic control, particularly voltage and frequency control. Conventional control strategies for power systems are complex and time-consuming to design because of the system's high-order nonlinearities. RL, typically implemented with neural network function approximators, shows promise for such problems because a suitably structured network can approximate arbitrary nonlinear system dynamics.
The proposed RL algorithm, the Guided Surrogate Gradient-based Evolution Strategy (GSES), determines the policy weights (which generate the control reference signal) without the back-propagation step used by many other RL algorithms; instead, it estimates the gradient through simultaneous perturbation stochastic approximation, achieving much faster and more robust learning convergence. GSES is introduced and implemented in three power system scenarios: a High Voltage Direct Current (HVDC) based inter-area oscillation damping system, a Doubly-Fed Induction Generator (DFIG) based Fault-Ride-Through (FRT) system, and a frequency regulation system based on a modified IEEE 39-bus network. In the HVDC-based system, the proposed GSES-based power oscillation damping control overcomes the difficulty of setting optimal HVDC controller parameters across a range of transient events, and outperforms conventional power oscillation damping methods. The GSES algorithm also proves effective in controlling DFIG power and DC-link capacitor voltage, which protects the DFIG rotor from over-current and maintains grid-connected operation. Finally, the proposed RL-based frequency response solution for wind farms, tested on the modified IEEE 39-bus system, reliably supports system frequency and prevents unnecessary load shedding.
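The gradient-free update at the heart of GSES can be illustrated with a minimal sketch of simultaneous perturbation stochastic approximation (SPSA). This is not the dissertation's implementation; the function names, perturbation scale, sample count, and the toy quadratic reward are all illustrative assumptions, showing only how a policy-weight gradient can be estimated from reward evaluations alone, without back-propagation.

```python
import numpy as np

def spsa_gradient_estimate(reward_fn, theta, delta=0.01, n_samples=8, rng=None):
    """Estimate a surrogate policy gradient without back-propagation:
    perturb all weights simultaneously (SPSA) and use reward differences."""
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        # Rademacher (+/-1) perturbation applied to every weight at once
        perturb = rng.choice([-1.0, 1.0], size=theta.shape)
        r_plus = reward_fn(theta + delta * perturb)
        r_minus = reward_fn(theta - delta * perturb)
        # Two reward evaluations yield a full gradient estimate
        grad += (r_plus - r_minus) / (2.0 * delta) * perturb
    return grad / n_samples

# Toy usage: maximize reward = -||theta - target||^2 for a linear "policy"
target = np.array([1.0, -2.0, 0.5])
reward = lambda w: -np.sum((w - target) ** 2)

rng = np.random.default_rng(0)
theta = np.zeros(3)
lr = 0.05
for _ in range(200):
    theta += lr * spsa_gradient_estimate(reward, theta, rng=rng)
```

Because each gradient estimate needs only forward reward evaluations, the same scheme applies when the "reward" is a closed-loop simulation score (e.g. damping performance or frequency deviation), which is what makes this family of methods attractive for power system control policies.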
Overall, this dissertation demonstrates the potential of RL-based techniques for power system dynamic control, particularly voltage and frequency control, and provides evidence for the effectiveness of the GSES algorithm across diverse power system scenarios. The use of RL in power systems could yield more efficient and effective control strategies during contingencies, which is crucial to maintaining the stability of today's large, high-order nonlinear dynamic power systems.
Copyright is held by the author. User is responsible for all copyright compliance.
Gao, Wei, "Power System Dynamic Control and Performance Improvement Based on Reinforcement Learning" (2023). Electronic Theses and Dissertations. 2253.