Reducing Energy Consumption From Data Center Cooling By Eliminating Heat-Generating Components
As AI Energy Demands Grow, Improved Power Electronics Will Be Needed
As AI becomes integrated into daily processes and data centers are built out to meet the computing needs of various sectors, electricity demand from this industry is expected to double by 2030 and potentially reach up to 1,200 TWh by 2035. Tech giants such as Google, Microsoft, and Meta have notably increased their emissions in the past year to meet demand from AI. Given these trends, reaching net-zero in the near term will pose a significant challenge.
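For a rough sense of scale, here is a back-of-envelope growth calculation in Python. The ~415 TWh baseline for 2024 is an assumption (roughly in line with recent IEA estimates) and is not stated above; the doubling and 1,200 TWh figures come from the projections themselves.

```python
# Back-of-envelope growth check for the demand figures above.
# The ~415 TWh baseline for 2024 is an assumption; the doubling-by-2030
# and 1,200 TWh-by-2035 figures come from the text.
baseline_twh, base_year = 415, 2024

doubled_twh, year_2030 = 2 * baseline_twh, 2030
cagr_2030 = (doubled_twh / baseline_twh) ** (1 / (year_2030 - base_year)) - 1

target_twh, year_2035 = 1200, 2035
cagr_2035 = (target_twh / baseline_twh) ** (1 / (year_2035 - base_year)) - 1

print(f"Implied growth to 2030: {cagr_2030:.1%}/yr")  # ~12.2%/yr
print(f"Implied growth to 2035: {cagr_2035:.1%}/yr")  # ~10.1%/yr
```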
These technology companies and data center operators are scrambling to construct new centers, unlock existing grid capacity to scale projects, and identify innovative new methods of energy generation (hello, fusion and geothermal?) as quickly as possible. However, a question remains: once these data centers are built, what are the most effective ways to improve efficiency and reduce overall energy consumption? Are there ways to fit more servers and racks into each building and optimize the processes running inside?
One area in particular may see significant improvement in the coming years: power delivery.
Reducing the Number of Voltage Regulators on the Server Blade Can Reduce Heat Produced
On each server, the Graphics Processing Unit (GPU) handles the main computing processes, but numerous other components support those functions. Voltage regulators and power delivery components occupy a substantial portion of the board, as much as 70-90% of the server blade's surface area, and are a significant source of heat.
These components are necessary to ensure that power is delivered correctly and precisely to the GPU, which is sensitive to fluctuations (and also a very expensive asset). However, solutions that improve power delivery on the server could reduce the number of components used and the heat they generate, thereby reducing the overall energy required to cool the system.
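To see why collapsing conversion stages matters, consider a simplified comparison of the heat dissipated by a multi-stage power delivery chain versus a single converter. The stage counts and efficiency figures below are illustrative assumptions, not measured values from any vendor.

```python
# Illustrative comparison: heat dissipated by a multi-stage power
# delivery chain vs. a single-stage converter. Stage efficiencies
# below are assumptions for illustration only.
def heat_dissipated(load_w: float, stage_efficiencies: list[float]) -> float:
    """Return watts lost as heat while delivering `load_w` to the chip."""
    power = load_w
    # Work backwards from the chip: each stage must supply its
    # output divided by its efficiency, losing the difference as heat.
    for eff in reversed(stage_efficiencies):
        power /= eff
    return power - load_w

gpu_load_w = 700  # ~TDP of a Hopper-class GPU, per the figures cited later

multi_stage = heat_dissipated(gpu_load_w, [0.98, 0.97, 0.96])  # e.g. PSU -> 48V -> 12V -> core
single_stage = heat_dissipated(gpu_load_w, [0.95])             # one high-ratio converter

print(f"Three-stage chain: {multi_stage:.0f} W of heat")   # ~67 W
print(f"Single-stage:      {single_stage:.0f} W of heat")  # ~37 W
```

Even with a slightly lower single-stage efficiency, removing intermediate stages cuts the heat that the cooling system must remove, per GPU, across every server in the facility.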
Daanaa’s Power Transaction Unit Can Reduce the Number of Heat-Generating Components in Data Centers and Support Other Industries
One company working on this problem is Daanaa, which is developing a multi-functional, programmable Power Transaction Unit (PTU) to address this challenge. Daanaa's technology can perform high-power, multi-step functions in a single converter, drastically reducing the number of components used in voltage conversion.
The core innovation in Daanaa's technology is the manipulation of the near-field reactive electromagnetic spectrum to produce a single-step, high-efficiency voltage conversion system capable of conversion ratios of 100x and beyond (e.g., 3 V to 800 V DC) at over 95% efficiency. This ability to control power bidirectionally allows functions that would normally require multiple components to be performed by a single, compact module that could fit beneath the GPU on the server. This would reduce ohmic losses and free up space on the server.
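A quick sketch of the ohmic-loss argument: for a fixed conductor resistance, delivering the same power at a higher voltage draws proportionally less current, and resistive loss scales with the square of current. The resistance and power values below are illustrative assumptions.

```python
# Why higher-voltage delivery cuts ohmic (I^2 * R) losses: same power,
# higher voltage, less current, and loss scales with current squared.
# The resistance and power values are illustrative assumptions.
def ohmic_loss_w(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    current_a = power_w / voltage_v         # I = P / V
    return current_a ** 2 * resistance_ohm  # P_loss = I^2 * R

rack_power_w = 50_000   # mid-range of the ~40-75 kW racks cited later
bus_resistance = 0.001  # 1 milliohm of busbar/cabling, assumed

for v in (48, 400, 800):
    loss = ohmic_loss_w(rack_power_w, v, bus_resistance)
    print(f"{v:>3} V bus: {rack_power_w / v:>7.0f} A, {loss:>8.1f} W lost")
# 48 V: ~1042 A, ~1085 W lost; 800 V: ~63 A, ~3.9 W lost
```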
While there are numerous ways Daanaa's technology can support the tech sector, it also has applications in other industries, notably solar and electric vehicles. When integrated with solar panels and systems, Daanaa's PTU can improve functionality and reduce the likelihood of failure. Distributed electronics can improve efficiency, reduce hot spots and overheating, improve monitoring, and minimize downtime.
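A simplified partial-shading example illustrates the value of distributed electronics. In a plain series string, current is capped by the weakest panel (ignoring bypass diodes for simplicity), while per-panel conversion lets each panel operate at its own maximum power point. All panel values below are assumptions for illustration.

```python
# Illustrative partial-shading comparison: a plain series string is
# capped by its weakest panel, while per-panel converters (distributed
# electronics such as a PTU) let each panel contribute what it can.
# Panel values are illustrative assumptions; bypass diodes are ignored.
panel_rated_w = 400
output_fractions = [1.0, 1.0, 1.0, 1.0, 1.0, 0.3]  # one panel 70% shaded

# Series string: every panel is dragged down to the weakest performer.
string_w = min(output_fractions) * panel_rated_w * len(output_fractions)

# Per-panel conversion: each panel operates at its own maximum power point.
distributed_w = sum(f * panel_rated_w for f in output_fractions)

print(f"Plain series string:  {string_w:.0f} W")       # 720 W
print(f"Per-panel conversion: {distributed_w:.0f} W")  # 2120 W
```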
Similarly, Daanaa's technology could support Vehicle Integrated Photovoltaic (VIPV) systems. The PTU would serve these applications by providing electronic control for curved VIPV surfaces, adjusting to dynamic lighting and shading conditions, and integrating with charging systems. The PTU can also enable batteries to run in parallel instead of in series, isolating faults and minimizing downtime.
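The fault-isolation argument can be sketched the same way: a series string works only if every unit works, while a parallel arrangement degrades gracefully. The unit count and failure probability below are assumed for illustration.

```python
# Series vs. parallel fault behavior: in series, one failure takes down
# the whole string; in parallel, output degrades gracefully.
# Unit count and failure probability are illustrative assumptions.
p_fail, n_units = 0.02, 8

# Series: the string delivers only if every unit works.
series_availability = (1 - p_fail) ** n_units

# Parallel: expected fraction of capacity still delivered.
parallel_expected_capacity = 1 - p_fail

print(f"Series string availability: {series_availability:.1%}")          # ~85.1%
print(f"Parallel expected capacity: {parallel_expected_capacity:.1%}")   # 98.0%
```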
What’s Next?
As data center operators and technology companies work together to build the next generation of data centers purpose-built for AI, they will likely develop systems around direct-to-chip cooling, in which cold plates are mounted on the hottest components of the server and liquid is circulated through them to carry heat away. One challenge with direct-to-chip cooling, however, is that as GPUs grow more powerful and produce more heat, these systems often require supplemental technologies, such as rear-door heat exchangers, to fully cool the system.
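For intuition on what these heat loads mean for a liquid loop, the heat balance Q = m_dot * c_p * dT gives a first-cut coolant flow estimate. The coolant properties and allowed temperature rise below are assumptions; the heat loads echo the rack figures discussed next.

```python
# First-cut coolant sizing for direct-to-chip cooling using the heat
# balance Q = m_dot * c_p * dT. The 10 K temperature rise and use of
# plain water are assumptions for illustration.
CP_WATER = 4186.0  # J/(kg*K), specific heat of water

def flow_lpm(heat_w: float, delta_t_k: float) -> float:
    """Liters per minute of water needed to absorb `heat_w` with a
    coolant temperature rise of `delta_t_k` (1 kg of water ~ 1 L)."""
    kg_per_s = heat_w / (CP_WATER * delta_t_k)
    return kg_per_s * 60.0

loads = [(700, "single ~700 W GPU"), (75_000, "75 kW rack"), (600_000, "600 kW rack")]
for load_w, label in loads:
    print(f"{label:>18}: {flow_lpm(load_w, delta_t_k=10):7.1f} L/min at a 10 K rise")
# ~1 L/min per GPU, ~107 L/min per 75 kW rack, ~860 L/min per 600 kW rack
```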
Immersion cooling may be more effective in some respects, since the entire server is submerged and all components can be cooled, but traction has been limited, and it would require specialized data centers built to accommodate the cooling systems.
As more data centers are built to accommodate high-performance compute and lock into direct-to-chip cooling infrastructure, technologies like Daanaa's PTU could further optimize these systems and reduce the amount of cooling required. Today, racks of NVIDIA's Hopper and Blackwell GPUs draw roughly 40-75 kW, with thermal design power of 700-800 W per GPU. The newly announced Rubin Ultra and Kyber rack systems, set to be released in 2027, could require up to 600 kW for a single rack.
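To put that jump in perspective, here is the busbar current implied at different distribution voltages. The rack powers come from the figures above; the 48 V and 800 V levels are common industry choices, assumed here for illustration.

```python
# Scale of the power-delivery challenge: current a rack busbar must
# carry at different distribution voltages. Rack powers come from the
# text; the voltage levels are assumed for illustration.
for rack_kw in (75, 600):
    for bus_v in (48, 800):
        amps = rack_kw * 1000 / bus_v
        print(f"{rack_kw:>3} kW rack at {bus_v:>3} V: {amps:>7.0f} A")
# A 600 kW rack at 48 V implies 12,500 A; at 800 V, 750 A.
```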
This significant jump in power consumption will demand innovation and improved system design in power delivery. As these companies plan and scale for the future, Daanaa's PTU and other power delivery technologies can support those efforts. Technologies that reduce heat-generating components on the server can lower overall energy consumption, and technologies that optimize uninterruptible power supply (UPS) systems can shrink the footprint of energy management infrastructure, opening up more space in the data center for servers and racks that generate revenue for operators.
