Journal of Modern Power Systems and Clean Energy

ISSN 2196-5625 CN 32-1884/TK

Electric Vehicle Charging Management Based on Deep Reinforcement Learning
Author:
Affiliation:

1. School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
2. Department of Electrical Engineering, Center for Electric Power and Energy, Smart Electric Components, Technical University of Denmark, Copenhagen, Denmark
3. Department of Energy Technology, Aalborg University, Aalborg, Denmark

Fund Project:

This work was supported by the Sichuan Science and Technology Program (No. 2020JDJQ0037).

    Abstract:

    A time-varying time-of-use electricity price can be used to reduce the charging cost of electric vehicle (EV) owners. Considering the uncertainty of price fluctuations and the randomness of EV owners' commuting behavior, we propose a deep reinforcement learning based method to minimize the charging cost of an individual EV. The charging problem is first formulated as a Markov decision process (MDP) with unknown transition probability. A modified long short-term memory (LSTM) neural network is used as the representation layer to extract temporal features from the electricity price signal. The deep deterministic policy gradient (DDPG) algorithm, which supports continuous action spaces, is used to solve the MDP. The proposed method automatically adjusts the charging strategy according to the electricity price to reduce the owner's charging cost. Several other methods for solving the charging problem are also implemented and quantitatively compared; the proposed method reduces the charging cost by up to 70.2% relative to the benchmark methods.
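    The MDP described above can be illustrated with a minimal charging-environment sketch. The code below is not the paper's implementation; the horizon, battery capacity, price process, and penalty weight are all hypothetical stand-ins (the paper learns the price dynamics via an LSTM and optimizes the continuous charging action with DDPG, whereas this sketch only defines the state, action, and reward structure).

    ```python
    import random

    class EVChargingEnv:
        """Illustrative MDP for single-EV charging under time-of-use prices.

        State:  (hours left until departure, normalized state of charge, current price).
        Action: continuous charging power in [0, max_power] (kW), as in DDPG.
        Reward: negative hourly charging cost, plus a penalty at departure
                if the battery is not full. All parameter values are hypothetical.
        """

        def __init__(self, horizon=8, capacity=24.0, max_power=6.0, seed=0):
            self.horizon = horizon        # hours the EV stays plugged in
            self.capacity = capacity      # battery capacity (kWh)
            self.max_power = max_power    # charger power limit (kW)
            self.rng = random.Random(seed)
            self.reset()

        def reset(self):
            self.t = 0
            self.soc = 6.0                # initial stored energy (kWh)
            self.price = 0.3              # starting price ($/kWh)
            return self._state()

        def _state(self):
            return (self.horizon - self.t, self.soc / self.capacity, self.price)

        def step(self, power):
            power = max(0.0, min(power, self.max_power))         # clip continuous action
            energy = min(power * 1.0, self.capacity - self.soc)  # 1-hour time step
            cost = energy * self.price
            self.soc += energy
            # A bounded random walk stands in for the real price signal,
            # whose temporal features the paper extracts with an LSTM.
            self.price = max(0.05, self.price + self.rng.uniform(-0.05, 0.05))
            self.t += 1
            done = self.t >= self.horizon
            reward = -cost
            if done and self.soc < self.capacity:
                reward -= 1.0 * (self.capacity - self.soc)       # unmet-demand penalty
            return self._state(), reward, done

    # Usage: an uncontrolled baseline that charges at full power immediately,
    # the kind of benchmark a learned price-aware policy would be compared against.
    env = EVChargingEnv()
    state, done, total_reward = env.reset(), False, 0.0
    while not done:
        state, reward, done = env.step(env.max_power)
        total_reward += reward
    ```

    Because the transition probability of the price process is unknown to the agent, a model-free actor-critic method such as DDPG can learn a charging policy directly from interaction with such an environment.
    
    
    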

History
  • Received: July 08, 2020
  • Revised: November 03, 2020
  • Online: May 12, 2022