The control of shared energy assets within building clusters has traditionally been confined to a discrete action space, owing in part to a computationally intractable decision space. In this work, we leverage the current state of the art in reinforcement learning (RL) for continuous control tasks, the deep deterministic policy gradient (DDPG) algorithm, to address this limitation. The goals of this paper are twofold: (i) to design an efficient charge/discharge dispatch policy for a shared battery system within a building cluster and (ii) to address the continuous-domain task of determining how much energy should be charged or discharged at each decision cycle. Experimentally, our results demonstrate an ability to exploit factors such as energy arbitrage, along with the continuous action space, toward demand peak minimization. The approach is computationally tractable, achieving efficient results after only 5 h of simulation. Additionally, the agent adapted to different building clusters, designing unique control strategies to address the energy demands of each cluster studied.
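To make the continuous charge/discharge formulation concrete, the sketch below models a toy shared-battery environment whose action is a single real number in [-1, 1] (negative = discharge, positive = charge), paired with a simple price-threshold arbitrage heuristic standing in for the learned DDPG actor. All names, capacities, and price thresholds here are illustrative assumptions, not values from the paper.

```python
import numpy as np


class BatteryEnv:
    """Toy shared-battery environment with a continuous action in [-1, 1].

    Hypothetical sketch: capacities, rates, and the reward (negative energy
    cost) are assumptions for illustration, not the paper's simulation.
    """

    def __init__(self, capacity_kwh=100.0, max_rate_kw=25.0):
        self.capacity = capacity_kwh
        self.max_rate = max_rate_kw
        self.soc = 0.5 * capacity_kwh  # state of charge, start half full

    def step(self, action, demand_kw, price):
        # Continuous action: negative = discharge, positive = charge.
        a = float(np.clip(action, -1.0, 1.0))
        power = a * self.max_rate
        # Respect physical state-of-charge limits.
        power = float(np.clip(power, -self.soc, self.capacity - self.soc))
        self.soc += power
        # Charging adds to the cluster's grid demand; discharging offsets it.
        net_demand = max(demand_kw + power, 0.0)
        cost = price * net_demand
        return self.soc, net_demand, -cost  # reward = negative energy cost


def arbitrage_policy(price, low=0.1, high=0.3):
    """Threshold heuristic illustrating energy arbitrage: charge when cheap,
    discharge when expensive. A trained DDPG actor network would replace this.
    """
    if price <= low:
        return 1.0
    if price >= high:
        return -1.0
    return 0.0
```

For example, with a high price of 0.3 $/kWh the heuristic returns -1.0 (full discharge), and a `step` with that action reduces the cluster's net grid demand by the battery's maximum discharge rate.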
Automated Design of Energy Efficient Control Strategies for Building Clusters Using Reinforcement Learning
Contributed by the Design Automation Committee of ASME for publication in the JOURNAL OF MECHANICAL DESIGN. Manuscript received July 6, 2018; final manuscript received September 21, 2018; published online December 20, 2018. Assoc. Editor: Harrison M. Kim.
Odonkor, P., and Lewis, K. (December 20, 2018). "Automated Design of Energy Efficient Control Strategies for Building Clusters Using Reinforcement Learning." ASME. J. Mech. Des. February 2019; 141(2): 021704. https://doi.org/10.1115/1.4041629