Reinforcement Learning Approach for Optimal Distributed Energy Management in a Microgrid

Elham Foruzan, Leen-Kiat Soh, Sohrab Asgarpoor

Research output: Contribution to journal › Article

7 Citations (Scopus)

Abstract

In this paper, a multiagent-based model is used to study distributed energy management in a microgrid (MG). The suppliers and consumers of electricity are modeled as autonomous agents, capable of making local decisions in order to maximize their own profit in a multiagent environment. For every supplier, a lack of information about customers and other suppliers creates challenges to optimal decision making in order to maximize its return. Similarly, customers face difficulty in scheduling their energy consumption without any information about suppliers and electricity prices. Additionally, there are several uncertainties involved in the nature of MGs due to variability in renewable generation output power and continuous fluctuation of customers' consumption. In order to prevail over these challenges, a reinforcement learning algorithm was developed to allow generation resources, distributed storages, and customers to develop optimal strategies for energy management and load scheduling without prior information about each other and the MG system. Case studies are provided to show how the overall performance of all entities converges as an emergent behavior to the Nash equilibrium, benefiting all agents.
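The learning setup the abstract describes can be illustrated with a minimal sketch: two independent tabular Q-learners, a supplier choosing a price and a customer choosing a demand level, each observing only its own reward. This is not the paper's algorithm; the candidate prices, marginal cost, and per-kWh utility below are invented for illustration. Under these toy payoffs the two greedy policies settle at the stage game's Nash equilibrium, echoing the convergence behavior the abstract reports.

```python
import random

class QLearner:
    """Tabular epsilon-greedy Q-learner over a discrete action set (stateless)."""
    def __init__(self, actions, alpha=0.1, epsilon=0.1):
        self.actions = actions
        self.alpha = alpha        # learning rate
        self.epsilon = epsilon    # exploration probability
        self.q = {a: 0.0 for a in actions}

    def act(self):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[a])

    def update(self, action, reward):
        # Stateless Q-update: nudge the estimate toward the observed reward.
        self.q[action] += self.alpha * (reward - self.q[action])

random.seed(0)
PRICES = [1.0, 2.0, 3.0]   # supplier's candidate prices ($/kWh) -- illustrative
LOADS = [1.0, 2.0, 3.0]    # customer's candidate demand levels (kWh) -- illustrative
COST, VALUE = 0.5, 2.5     # supplier marginal cost, customer utility per kWh

supplier, customer = QLearner(PRICES), QLearner(LOADS)
for _ in range(20000):
    p, d = supplier.act(), customer.act()
    supplier.update(p, (p - COST) * d)   # profit = margin * quantity sold
    customer.update(d, (VALUE - p) * d)  # surplus = net value * quantity bought

best_price = max(PRICES, key=lambda a: supplier.q[a])
best_load = max(LOADS, key=lambda a: customer.q[a])
```

With these payoffs the unique pure Nash equilibrium is the highest price and the lowest demand, and the two independent learners find it without either agent seeing the other's rewards, which is the qualitative point of the paper's case studies.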

Original language: English (US)
Article number: 8331897
Pages (from-to): 5749-5758
Number of pages: 10
Journal: IEEE Transactions on Power Systems
Volume: 33
Issue number: 5
DOIs: 10.1109/TPWRS.2018.2823641
State: Published - Sep 2018

Keywords

  • Microgrid
  • distributed control
  • reinforcement learning
  • renewable generation

ASJC Scopus subject areas

  • Energy Engineering and Power Technology
  • Electrical and Electronic Engineering

Cite this

Reinforcement Learning Approach for Optimal Distributed Energy Management in a Microgrid. / Foruzan, Elham; Soh, Leen-Kiat; Asgarpoor, Sohrab.

In: IEEE Transactions on Power Systems, Vol. 33, No. 5, 8331897, 09.2018, p. 5749-5758.

@article{0c68b5b9dbd243e59aeb92f9e0b6ccca,
title = "Reinforcement Learning Approach for Optimal Distributed Energy Management in a Microgrid",
abstract = "In this paper, a multiagent-based model is used to study distributed energy management in a microgrid (MG). The suppliers and consumers of electricity are modeled as autonomous agents, capable of making local decisions in order to maximize their own profit in a multiagent environment. For every supplier, a lack of information about customers and other suppliers creates challenges to optimal decision making in order to maximize its return. Similarly, customers face difficulty in scheduling their energy consumption without any information about suppliers and electricity prices. Additionally, there are several uncertainties involved in the nature of MGs due to variability in renewable generation output power and continuous fluctuation of customers' consumption. In order to prevail over these challenges, a reinforcement learning algorithm was developed to allow generation resources, distributed storages, and customers to develop optimal strategies for energy management and load scheduling without prior information about each other and the MG system. Case studies are provided to show how the overall performance of all entities converges as an emergent behavior to the Nash equilibrium, benefiting all agents.",
keywords = "Microgrid, distributed control, reinforcement learning, renewable generation",
author = "Elham Foruzan and Leen-Kiat Soh and Sohrab Asgarpoor",
year = "2018",
month = sep,
doi = "10.1109/TPWRS.2018.2823641",
language = "English (US)",
volume = "33",
pages = "5749--5758",
journal = "IEEE Transactions on Power Systems",
issn = "0885-8950",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "5",

}

TY  - JOUR
T1  - Reinforcement Learning Approach for Optimal Distributed Energy Management in a Microgrid
AU  - Foruzan, Elham
AU  - Soh, Leen-Kiat
AU  - Asgarpoor, Sohrab
PY  - 2018/9
Y1  - 2018/9
N2  - In this paper, a multiagent-based model is used to study distributed energy management in a microgrid (MG). The suppliers and consumers of electricity are modeled as autonomous agents, capable of making local decisions in order to maximize their own profit in a multiagent environment. For every supplier, a lack of information about customers and other suppliers creates challenges to optimal decision making in order to maximize its return. Similarly, customers face difficulty in scheduling their energy consumption without any information about suppliers and electricity prices. Additionally, there are several uncertainties involved in the nature of MGs due to variability in renewable generation output power and continuous fluctuation of customers' consumption. In order to prevail over these challenges, a reinforcement learning algorithm was developed to allow generation resources, distributed storages, and customers to develop optimal strategies for energy management and load scheduling without prior information about each other and the MG system. Case studies are provided to show how the overall performance of all entities converges as an emergent behavior to the Nash equilibrium, benefiting all agents.
AB  - In this paper, a multiagent-based model is used to study distributed energy management in a microgrid (MG). The suppliers and consumers of electricity are modeled as autonomous agents, capable of making local decisions in order to maximize their own profit in a multiagent environment. For every supplier, a lack of information about customers and other suppliers creates challenges to optimal decision making in order to maximize its return. Similarly, customers face difficulty in scheduling their energy consumption without any information about suppliers and electricity prices. Additionally, there are several uncertainties involved in the nature of MGs due to variability in renewable generation output power and continuous fluctuation of customers' consumption. In order to prevail over these challenges, a reinforcement learning algorithm was developed to allow generation resources, distributed storages, and customers to develop optimal strategies for energy management and load scheduling without prior information about each other and the MG system. Case studies are provided to show how the overall performance of all entities converges as an emergent behavior to the Nash equilibrium, benefiting all agents.
KW  - Microgrid
KW  - distributed control
KW  - reinforcement learning
KW  - renewable generation
UR  - http://www.scopus.com/inward/record.url?scp=85052732987&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=85052732987&partnerID=8YFLogxK
U2  - 10.1109/TPWRS.2018.2823641
DO  - 10.1109/TPWRS.2018.2823641
M3  - Article
AN  - SCOPUS:85052732987
VL  - 33
SP  - 5749
EP  - 5758
JO  - IEEE Transactions on Power Systems
JF  - IEEE Transactions on Power Systems
SN  - 0885-8950
IS  - 5
M1  - 8331897
ER  -