DOI: 10.35833/MPCE.2020.000522
A Data-driven Method for Fast AC Optimal Power Flow Solutions via Deep Reinforcement Learning
Authors:
Yuhao Zhou1, Bei Zhang2, Chunlei Xu3, Tu Lan2, Ruisheng Diao2, Di Shi2, Zhiwei Wang2, Wei-Jen Lee1
Author Affiliation:
1. Electrical Engineering Department, University of Texas at Arlington, Arlington, TX 76019, USA; 2. GEIRI North America, San Jose, CA 95134, USA; 3. State Grid Jiangsu Electric Power Company, Nanjing, China
Foundation:
This work was supported by the State Grid Science and Technology Program "Research on Real-time Autonomous Control Strategies for Power Grid Based on AI Technologies" (No. 5700-201958523A-0-0-00).
Abstract:
With the increasing penetration of renewable energy, power grid operators are observing fast and large daily fluctuations in power and voltage profiles. Fast and accurate control actions derived in real time are vital to ensuring system security and economics. To this end, solving alternating current (AC) optimal power flow (OPF) with operational constraints remains an important yet challenging optimization problem for the secure and economic operation of the power grid. This paper presents a novel method to derive fast OPF solutions using a state-of-the-art deep reinforcement learning (DRL) algorithm, which can greatly assist power grid operators in making rapid and effective decisions. The presented method adopts imitation learning to generate initial weights for the neural network (NN), and a proximal policy optimization algorithm to train and test stable and robust artificial intelligence (AI) agents. Training and testing procedures are conducted on the IEEE 14-bus and the Illinois 200-bus systems. The results show the effectiveness of the method, with significant potential for assisting power grid operators in real-time operations.
Keywords:
Alternating current (AC) optimal power flow (OPF); deep reinforcement learning (DRL); imitation learning; proximal policy optimization
Received: July 26, 2020
Online Time: December 3, 2020
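The abstract names proximal policy optimization (PPO) as the algorithm used to train the OPF agents. As a minimal illustration of the core idea behind PPO, not of the paper's actual agent or network architecture, the sketch below computes PPO's clipped surrogate objective with numpy; the function name and all numbers are assumptions introduced here for illustration.

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective from PPO (illustrative sketch).

    logp_new / logp_old: log-probabilities of the taken actions under the
    current policy and the data-collecting policy; advantages: estimated
    advantages of those actions; eps: clipping range.
    """
    ratio = np.exp(logp_new - logp_old)              # importance-sampling ratio
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)   # limit the policy update size
    # Pessimistic (elementwise-min) bound, averaged over the sampled actions
    return np.mean(np.minimum(ratio * advantages, clipped * advantages))

# When the new policy equals the old one, every ratio is 1 and the
# objective reduces to the mean advantage.
adv = np.array([1.0, -1.0, 2.0])
logp = np.log(np.array([0.5, 0.3, 0.2]))
print(ppo_clip_objective(logp, logp, adv))
```

The clipping keeps each policy update close to the data-collecting policy, which is what makes PPO training comparatively stable; an imitation-learning warm start, as the abstract describes, then supplies reasonable initial network weights so PPO does not begin from a random policy.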