The increasing demand for cloud computing services, which allow computing resources to be used more flexibly, has made excessive energy consumption in data centers a severe problem. Recent research has paid much attention to the energy consumption of computing and cooling equipment. To reduce data center energy consumption, the energy efficiency of the cooling equipment must be considered. This study theoretically modeled the relationship between a server's internal heat sources and its exhaust temperature. First, we proposed a model that extends a previous study to describe the delay in temperature change and to handle multiple internal heat sources. Then, we experimentally evaluated the prediction accuracy of the proposed model on a real server and conducted a simulation to confirm the effectiveness of the prediction model for load balancing. The simulation results showed that the proposed method reduced the average maximum server temperature by 0.27 °C compared with the conventional method, which theoretically corresponds to energy savings of 260 kWh per day, or 95 MWh per year, in a typical data center.