Microgrids are an effective way to improve the utilization of renewable energy and have become an indispensable part of modern power networks. In a microgrid, an energy management system (EMS) ensures stable operation and maximizes energy efficiency. Because non-steerable generation and non-flexible consumption in the microgrid are uncertain, designing an energy management algorithm that schedules the steerable generators and storage is challenging. To address this problem, this paper models the energy management system as a Markov Decision Process (MDP) with a continuous action space and then leverages an offline reinforcement learning algorithm to help the EMS make scheduling decisions. Compared with other EMS schemes based on deep reinforcement learning, our method can effectively exploit the optimal decision data generated by mathematical programming, i.e., expert knowledge, to improve learning efficiency and decision quality. Simulations based on real-world data verify that the proposed algorithm outperforms other reinforcement learning algorithms.
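As a rough illustration of the idea described above, the sketch below learns a continuous-action scheduling policy purely offline from "expert" decisions, using behavior cloning as a minimal stand-in for the paper's offline reinforcement learning algorithm. All names, state/action dimensions, and the synthetic dataset are assumptions for illustration only and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical MDP state: [renewable generation, non-flexible load, storage level]
# Hypothetical continuous action: [steerable generator setpoint, storage power]
STATE_DIM, ACTION_DIM = 3, 2

# Synthetic "expert" dataset standing in for optimizer-generated decisions
# (in the paper these would come from mathematical programming).
states = rng.uniform(0.0, 1.0, size=(500, STATE_DIM))
true_W = rng.normal(size=(STATE_DIM, ACTION_DIM))
actions = states @ true_W + 0.01 * rng.normal(size=(500, ACTION_DIM))

# Offline training step: fit a linear policy pi(s) = s @ W by least squares,
# i.e., clone the expert's behavior without any environment interaction.
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy should closely reproduce the expert's scheduling decisions.
mse = float(np.mean((states @ W - actions) ** 2))
print(f"imitation MSE: {mse:.4f}")
```

A real offline RL method would additionally use the reward signal and guard against out-of-distribution actions; this sketch only shows the data flow from expert decisions to a learned continuous-action policy.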