Energy consumption represents a significant cost in data center operation. In 2010, data centers worldwide consumed 240 billion kWh of electricity (1.3% of the world total), enough to power more than five Hong Kongs or roughly all of Spain. However, real-world statistics reveal that a large fraction of this energy is used to power idle servers when the workload is low. Dynamic provisioning techniques aim to save this portion of the energy by turning off unnecessary servers. In dynamic provisioning, a common approach is to predict the future workload to some extent and exploit this information to achieve good performance. This naturally leads to the following fundamental questions:
– Can we design solutions that require no future workload information, called online solutions, yet still achieve close-to-optimal performance?
– Can we characterize the benefit of knowing future workload information in dynamic provisioning?
In this work, we seek answers to the above questions. In particular, we develop online dynamic provisioning solutions both with and without future workload information available. We first reveal an elegant structure of the offline dynamic provisioning problem, which allows us to characterize the optimal solution in a “divide-and-conquer” manner. We then exploit this insight to design two online algorithms with competitive ratios 2-α and e/(e-1+α), respectively, where 0≤α≤1 is the normalized size of a look-ahead window in which exact workload prediction is available. We prove that these competitive ratios are the best possible for deterministic and randomized algorithms, respectively; hence, they characterize the benefit of predicting future workload. A fundamental observation is that future workload information beyond the full-size look-ahead window (corresponding to α=1) will not improve dynamic provisioning performance. We remark that our results hold as long as the overall energy demand (covering mainly servers, cooling, and power conditioning) is a convex and increasing function of the total number of active servers. Our algorithms are decentralized and easy to implement. We demonstrate energy savings of 20-71% in a case study using real-world traces.
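To make the setting concrete, the offline problem underlying this line of work is commonly formulated as follows. This is a sketch under assumed notation, not necessarily the exact model in the paper: x_t denotes the number of active servers in slot t, d_t the workload, f the convex increasing energy-cost function mentioned above, and β the cost of turning a server on.

```latex
% Illustrative offline dynamic provisioning formulation (notation is assumed):
%   x_t : number of active servers in slot t
%   d_t : workload demand in slot t
%   f   : convex, increasing energy-cost function
%   beta: cost incurred each time a server is switched on
\min_{x_1,\ldots,x_T}\;
  \sum_{t=1}^{T} f(x_t)
  \;+\; \beta \sum_{t=1}^{T} \bigl[\, x_t - x_{t-1} \,\bigr]^{+}
\qquad \text{s.t.}\quad x_t \ge d_t,\; x_t \ge 0,\;\; \forall t,
```

The first term is the operating energy cost and the second charges β for each server turned on, which is what makes naively matching capacity to instantaneous load suboptimal; an online algorithm must choose x_t seeing only the workload up to t (plus the α-sized look-ahead window).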
More information can be found at http://www.ie.cuhk.edu.hk/~mhchen/projects/dynamic.provisioning.in.data.centers.html .
Minghua Chen received his B.Eng. and M.S. degrees from the Department of Electronic Engineering at Tsinghua University in 1999 and 2001, respectively. He received his Ph.D. degree from the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley in 2006. He spent one year visiting Microsoft Research Redmond as a postdoctoral researcher. He joined the Department of Information Engineering, the Chinese University of Hong Kong, in 2007, where he is currently an Associate Professor. He is also currently an Adjunct Associate Professor at the Institute of Interdisciplinary Information Sciences, Tsinghua University. He received the Eli Jury Award from UC Berkeley in 2007 (presented to a graduate student or recent alumnus for outstanding achievement in the area of systems, communications, control, or signal processing) and the Chinese University of Hong Kong Young Researcher Award in 2013. He has also received several best paper awards, including the IEEE ICME Best Paper Award in 2009, the IEEE Transactions on Multimedia Prize Paper Award in 2009, and the ACM Multimedia Best Paper Award in 2012. He serves as TPC Co-Chair of ACM e-Energy 2016 and General Chair of ACM e-Energy 2017. He is currently an Associate Editor of the IEEE/ACM Transactions on Networking. His recent research interests include energy systems (e.g., smart power grids and energy-efficient data centers), intelligent transportation, distributed optimization, multimedia networking, wireless networking, network coding, and delay-constrained network information flow.