Below are the questions asked during the event, along with their respective answers.

Q: You’ve discussed data center energy consumption in the U.S. How does energy consumption by U.S. data centers compare with worldwide data center energy consumption?
A: In 2018, global data center power consumption was 198 TWh. In the same year, U.S. data centers consumed about 75 TWh. Hence, energy consumption by U.S. data centers is a little over one-third (roughly 38%) of the global figure.

Q: How much energy is needed to transport all the data that data centers output to their end users?
A: In 2018, data networks consumed around 260 TWh globally, with mobile networks accounting for two-thirds of the total. Hence, global networks consumed approximately 30% more energy than the data centers did.
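
As a sanity check on the two comparisons above, here is some purely illustrative arithmetic using the 2018 figures cited in these answers (the variable names are mine):

```python
# Illustrative arithmetic using the 2018 figures cited in these answers (TWh).
global_dc_twh = 198   # global data center consumption
us_dc_twh = 75        # U.S. data center consumption
networks_twh = 260    # global data network consumption

print(f"U.S. share of global data center energy: {us_dc_twh / global_dc_twh:.0%}")  # ~38%
print(f"Networks vs. data centers: {networks_twh / global_dc_twh - 1:+.0%}")        # ~+31%
```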

Q: With PUE, “Energy Powering the IT Equipment” still includes the cooling infrastructure INSIDE the IT equipment, notably all fan power. When are we (the industry) going to start adopting a new, more telling measure in which IT power (the denominator) includes digital power only (and not fans or pumps)?
A: In hyperscale data centers, the air movers are often external to the rack-mounted servers, so in that case the IT and fan power values are separately accounted for, as you are proposing. However, within a server with integrated fans, there is no simple relationship between fan power and IT equipment power, so these two quantities would have to be separately monitored for each server to get meaningful numbers. I just don’t think that there is much motivation in the industry at present to get to this level of granularity.
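
For illustration only, here is a minimal sketch of the metric the questioner is proposing. The power split, the numbers, and the name digital_only_pue are all hypothetical; this is not an industry-standard definition:

```python
# Hypothetical illustration of the proposal above: count only "digital" power
# as IT load, moving server-internal fan power into the overhead.
# All numbers are invented for illustration (kW).
digital_power = 900.0     # silicon-only IT power (hypothetical)
server_fan_power = 100.0  # fans integrated inside the servers (hypothetical)
facility_power = 1200.0   # total facility power, incl. cooling and conversion

standard_pue = facility_power / (digital_power + server_fan_power)  # fans count as IT load
digital_only_pue = facility_power / digital_power                   # fans count as overhead

print(f"standard PUE:     {standard_pue:.2f}")      # 1.20
print(f"digital-only PUE: {digital_only_pue:.2f}")  # 1.33
```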

Q: Is the slight improvement predicted for 2020 due to the adoption of new cooling technologies, such as liquid cooling? If so, why is the improvement so minor?
A: Liquid cooling was already in wide use in hyperscale data centers in 2016, and in common use in larger but more conventional data centers, where water-cooled, rear-door heat exchangers are common, for example. As we approach 2020, further improvements tend to be incremental. Even though the changes in PUE are relatively small, the energy savings are actually quite large, since the IT equipment energy itself is very large.
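
A hedged numerical sketch of that last point, with made-up numbers:

```python
# Made-up example: a small PUE improvement at a large, fixed IT load.
# Facility energy = PUE * IT energy, so the saving is IT energy * delta-PUE.
it_energy_twh = 20.0                # annual IT equipment energy (hypothetical)
pue_before, pue_after = 1.25, 1.20  # hypothetical incremental improvement

saving_twh = it_energy_twh * (pue_before - pue_after)
print(f"Annual saving: {saving_twh:.1f} TWh")  # 1.0 TWh from a 0.05 PUE drop
```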

Q: Do these data take into account the changes in Moore’s law? Is Moore’s law still applicable in its original form?
A: Moore’s law scaling leads to higher IT equipment performance per watt. The rate of improvement per year really depends on the investment decisions made by the IC fabs, so the term “law” is misleading. In past decades, it made business sense for the industry to invest at a level that maintained a doubling in the number of transistors per unit area every one and a half years. However, the cost of continuing to do so is prohibitive. I think it’s more productive just to view it as a behavioral trend line that should be tracked with each passing year.

Q: On average, what percentage of power is used for cooling in a hyperscale data center?
A: Unfortunately, the PUE data don’t separately quantify the energy used in power conversion versus that used in cooling alone. At the current PUE value of 1.2 in hyperscale data centers, the infrastructure accounts for about 17% of the total facility energy. I would expect the cooling energy to dominate this number.
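
That 17% follows directly from the definition of PUE (total facility energy divided by IT equipment energy); a quick check:

```python
# Infrastructure (cooling + power conversion) share of total facility energy
# implied by a given PUE: (PUE - 1) / PUE.
pue = 1.2
overhead_fraction = (pue - 1) / pue
print(f"{overhead_fraction:.1%}")  # 16.7%, i.e. about 17%
```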

Q: Thanks, Dr. Guenin! Super presentation. Can you comment on direct liquid cooling of electronic components vs. submerged servers?
A: It’s my personal opinion that there would be serviceability issues in the use of submersible cooling in servers. I would expect that the time required to pull a faulty unit out of service would be longer with submersible cooling. That would be at odds with current service models.

Q: Do you think liquid cooling in data centers will increase?
A: Yes. Liquid cooling is already being used in the data centers where it makes the most sense. I can only see its use increasing.

Q: What is the effect of More than Moore on PUE, positive or negative?
A: Moore’s law scaling has historically been the result of shrinking the size of individual transistors on an IC, yielding more functionality, higher frequencies, and lower power on the IC. More than Moore has the same objectives, but achieves them through larger-scale integration using System-on-Chip (SoC), System-in-Package (SiP), or 3D wafer approaches. Getting more IT performance per watt is a good thing: it means fewer watts would be needed to perform the same workload. However, to first order, this won’t affect the PUE, which quantifies the efficiency of the infrastructure rather than of the IT equipment.
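
A quick illustration of that last point, with hypothetical numbers: better performance per watt lowers total energy in proportion, but the PUE ratio itself is unchanged.

```python
# Hypothetical: halving IT power for the same workload at a fixed PUE.
pue = 1.2
it_kw_before, it_kw_after = 1000.0, 500.0  # invented values (kW)

total_kw_before = pue * it_kw_before  # 1200 kW
total_kw_after = pue * it_kw_after    # 600 kW: total power halves,
                                      # but PUE is unchanged by definition
print(total_kw_before, total_kw_after)
```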

Q: Can you please talk to “end of Moore’s law scaling” and its impact on projected energy costs?
A: Moore’s law scaling has historically been the result of shrinking the size of individual transistors on an IC, yielding more functionality, higher frequencies, and lower power on the IC. More than Moore has the same objectives, but achieves them through larger-scale integration using System-on-Chip (SoC), System-in-Package (SiP), or 3D wafer approaches. To first order, projected energy consumption, and hence energy cost, depends on how quickly performance per watt continues to improve by either path.

Q: Do you think it will be possible, economically, to recover thermal energy and then get PUE < 1, especially with liquid cooling?
A: Recovering thermal energy from a data center becomes more feasible as the temperature of the outflowing water gets higher. In general, though, the temperature of the ejected water is not high enough to run an efficient thermodynamic process. (Note also that, as conventionally defined, PUE is the ratio of total facility energy to IT energy and so cannot fall below 1, even with heat recovery.) In colder climates, however, heated water from data centers has been piped into buildings to replace other heat sources.