TY - GEN
T1 - Improving energy efficiency in data centers by controlling task distribution and cooling
AU - Nakajo, Yusuke
AU - Athavale, Jayati
AU - Yoda, Minami
AU - Joshi, Yogendra
AU - Nishi, Hiroaki
N1 - Funding Information:
This work was supported by the Technology Foundation of the R&D project “Design of Information and Communication Platform for Future Smart Community Services” by the Ministry of Internal Affairs and Communications of Japan. Moreover, the authors express their gratitude to MEXT/JSPS KAKENHI Grant (B) Number JP17H01739 and the Global Smart Society Creation Project Research of Keio University. The measurements reported in this paper were conducted at the Georgia Tech Data Center Laboratory from September through December 2017.
Publisher Copyright:
Copyright © 2018 ASME
PY - 2018
Y1 - 2018
N2 - The rapid growth in cloud computing, the Internet of Things (IoT), and data processing via Machine Learning (ML) has greatly increased our need for computing resources. Given this rapid growth, data centers are expected to consume an ever larger share of the global energy supply, so improving their energy efficiency is crucial. One of the largest sources of energy consumption is the energy required to cool data centers and keep the servers within their intended operating temperature range. Indeed, about 40% of a data center’s total power consumption is for air conditioning [1]. Here, we study how the server air inlet and outlet temperatures, as well as the CPU temperatures, depend upon server loads typical of real Internet Protocol (IP) traces. The trace data used here are from Google clusters and include the timestamps, job and task IDs, and the number and usage of CPU cores. The resulting IT loads are distributed using standard load-balancing methods such as Round Robin (RR) and the CPU utilization method. Experiments are conducted in the Data Center Laboratory (DCL) at the Georgia Institute of Technology to monitor the server outlet air temperature, as well as real-time CPU temperatures, for servers at different heights within the rack. Server temperatures were monitored online using XBee, Raspberry Pi, and Arduino hardware, along with hot-wire anemometers. Given that the temperature response varies with server position, in part due to spatial variations in the cooling airflow over the rack inlet and in the server fan speeds, a new load-balancing approach that accounts for this spatially varying temperature response within a rack is tested and validated in this paper.
AB - The rapid growth in cloud computing, the Internet of Things (IoT), and data processing via Machine Learning (ML) has greatly increased our need for computing resources. Given this rapid growth, data centers are expected to consume an ever larger share of the global energy supply, so improving their energy efficiency is crucial. One of the largest sources of energy consumption is the energy required to cool data centers and keep the servers within their intended operating temperature range. Indeed, about 40% of a data center’s total power consumption is for air conditioning [1]. Here, we study how the server air inlet and outlet temperatures, as well as the CPU temperatures, depend upon server loads typical of real Internet Protocol (IP) traces. The trace data used here are from Google clusters and include the timestamps, job and task IDs, and the number and usage of CPU cores. The resulting IT loads are distributed using standard load-balancing methods such as Round Robin (RR) and the CPU utilization method. Experiments are conducted in the Data Center Laboratory (DCL) at the Georgia Institute of Technology to monitor the server outlet air temperature, as well as real-time CPU temperatures, for servers at different heights within the rack. Server temperatures were monitored online using XBee, Raspberry Pi, and Arduino hardware, along with hot-wire anemometers. Given that the temperature response varies with server position, in part due to spatial variations in the cooling airflow over the rack inlet and in the server fan speeds, a new load-balancing approach that accounts for this spatially varying temperature response within a rack is tested and validated in this paper.
KW - Data Center
KW - Load Balancing
KW - Wireless Sensor System
UR - http://www.scopus.com/inward/record.url?scp=85057238771&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85057238771&partnerID=8YFLogxK
U2 - 10.1115/IPACK2018-8305
DO - 10.1115/IPACK2018-8305
M3 - Conference contribution
AN - SCOPUS:85057238771
T3 - ASME 2018 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems, InterPACK 2018
BT - ASME 2018 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems, InterPACK 2018
PB - American Society of Mechanical Engineers (ASME)
T2 - ASME 2018 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems, InterPACK 2018
Y2 - 27 August 2018 through 30 August 2018
ER -