Journal of Modern Power Systems and Clean Energy

ISSN 2196-5625 CN 32-1884/TK

A Data-driven Method for Fast AC Optimal Power Flow Solutions via Deep Reinforcement Learning
Author:
Affiliation:

1. Electrical Engineering Department, University of Texas at Arlington, Arlington, TX 76019, USA
2. GEIRI North America, San Jose, CA 95134, USA
3. State Grid Jiangsu Electric Power Company, Nanjing, China

Fund Project:

This work was supported by State Grid Science and Technology Program “Research on Real-time Autonomous Control Strategies for Power Grid Based on AI Technologies” (No. 5700-201958523A-0-0-00).

    Abstract:

    With the increasing penetration of renewable energy, power grid operators are observing fast and large fluctuations in power and voltage profiles on a daily basis. Fast and accurate control actions derived in real time are vital to ensure system security and economics. To this end, solving the alternating current (AC) optimal power flow (OPF) problem with operational constraints remains an important yet challenging optimization task for secure and economic operation of the power grid. This paper presents a novel method to derive fast OPF solutions using a state-of-the-art deep reinforcement learning (DRL) algorithm, which can greatly assist power grid operators in making rapid and effective decisions. The presented method adopts imitation learning to generate initial weights for the neural network (NN), and a proximal policy optimization (PPO) algorithm to train and test stable and robust artificial intelligence (AI) agents. Training and testing procedures are conducted on the IEEE 14-bus and the Illinois 200-bus systems. The results show the effectiveness of the method, with significant potential for assisting power grid operators in real-time operations.
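
    The two-stage pipeline described in the abstract (an imitation-learning warm start on solutions from a conventional OPF solver, followed by PPO fine-tuning) can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: the state/action dimensions, the placeholder training data, the reward-derived advantages, and all variable names are assumptions.

        # Hypothetical sketch: imitation-learning warm start, then a PPO-style
        # clipped update for an AC OPF control policy. All data are placeholders.
        import torch
        import torch.nn as nn

        STATE_DIM, ACTION_DIM = 28, 5          # e.g. bus measurements -> generator set-points (assumed sizes)

        policy = nn.Sequential(                # small policy network (mean of a Gaussian policy)
            nn.Linear(STATE_DIM, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, ACTION_DIM),
        )
        log_std = nn.Parameter(torch.zeros(ACTION_DIM))
        opt = torch.optim.Adam(list(policy.parameters()) + [log_std], lr=3e-4)

        # Stage 1: imitation learning on (state, action) pairs from a conventional OPF solver
        states = torch.randn(1024, STATE_DIM)           # placeholder for recorded grid states
        expert_actions = torch.randn(1024, ACTION_DIM)  # placeholder for the solver's dispatch decisions
        for _ in range(200):
            loss = nn.functional.mse_loss(policy(states), expert_actions)
            opt.zero_grad(); loss.backward(); opt.step()

        # Stage 2: PPO fine-tuning with the clipped surrogate objective
        def log_prob(s, a):
            dist = torch.distributions.Normal(policy(s), log_std.exp())
            return dist.log_prob(a).sum(-1)

        actions = policy(states).detach() + torch.randn(1024, ACTION_DIM) * log_std.exp().detach()
        advantages = torch.randn(1024)          # placeholder: would come from the OPF-cost-based reward
        old_logp = log_prob(states, actions).detach()
        for _ in range(10):
            ratio = (log_prob(states, actions) - old_logp).exp()
            surr = torch.min(ratio * advantages,
                             torch.clamp(ratio, 0.8, 1.2) * advantages)  # clip ratio at 1 +/- 0.2
            loss = -surr.mean()
            opt.zero_grad(); loss.backward(); opt.step()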

History
  • Received: July 26, 2020
  • Online: December 03, 2020