The immediate effect of the GenAI GPU Squeeze was to reduce availability, whether through direct purchase or cloud access, increase costs, and push demand through the roof.
A secondary issue has been developing over the last several years: even if your organization has secured several racks of GPUs, how will you power them, and where will you put them?
For instance, many universities have traditionally placed new equipment in their campus datacenter. Many of these datacenters are now “tapped out,” with no more space or power to offer. Estimates for a current “GPU rack” range from 50 to 100 kW (kilowatts) per rack, while previous estimates for “CPU racks” were 10-17 kW per rack. If you want to co-locate four GPU racks, a datacenter that can provide 400 kW of power may be difficult to find.
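As a back-of-the-envelope check on that 400 kW figure, the power budget for a small colocation is just rack count times per-rack draw. A minimal sketch, using the rough per-rack estimates quoted above (these are estimates from the text, not measured values):

```python
# Rough per-rack power estimates (kW) quoted in the text, as (low, high) ranges.
GPU_RACK_KW = (50, 100)
CPU_RACK_KW = (10, 17)

def rack_power_kw(racks, per_rack_kw):
    """Total IT power (kW) for `racks` racks, as a (low, high) range."""
    low, high = per_rack_kw
    return (racks * low, racks * high)

# Four GPU racks land at 200-400 kW; the same four racks of CPUs need only 40-68 kW.
print(rack_power_kw(4, GPU_RACK_KW))  # (200, 400)
print(rack_power_kw(4, CPU_RACK_KW))  # (40, 68)
```

The high end of the range is what a datacenter must be provisioned to supply, which is why 400 kW is the planning number for four GPU racks.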
The same applies to the smaller companies that provide HPC colocation services. They are finding that current datacenters are tapped out and are having to search for new space and power. These companies are the non-hyperscalers that cannot stand up an entire datacenter campus on their own.
A recent report from JLL, a real estate investment and management firm, Data Centers 2024 Global Outlook (available in exchange for an email address), shares some interesting trends. The report explores how datacenters need to be designed, operated, and sourced to meet the evolving needs of the global economy, in particular the power increase required by GPU-heavy GenAI clusters.
The AI-fueled growth is expected to continue into the near future. Consumers and businesses are anticipated to generate twice as much data in the next five years as all the data created over the past ten years.
In addition to GPU computing needs, GenAI is driving storage growth: datacenter storage capacity is expected to grow from 10.1 zettabytes (ZB) in 2023 to 21.0 ZB in 2027, a five-year compound annual growth rate of 18.5% [1]. This increased storage will generate a need for more datacenters, and generative AI’s greater energy requirements, ranging from 300 to 500+ megawatts per campus, will also require more energy-efficient designs and locations. The need for more power will require datacenter operators to increase efficiency and work with local governments to find sustainable energy sources to support datacenter needs.
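For reference, a compound annual growth rate (CAGR) is just the constant yearly rate that takes a starting value to an ending value. A minimal sketch (note that IDC’s quoted 18.5% is a five-year figure computed from an earlier base year not given in this article, so the 2023-2027 endpoints alone imply a somewhat higher annual rate):

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by growing start -> end over `years` years."""
    return (end / start) ** (1 / years) - 1

# Annual rate implied by the 2023 -> 2027 storage endpoints alone (4 years):
rate = cagr(10.1, 21.0, 4)
print(f"{rate:.1%}")  # ~20.1% per year
```

Either way, the headline is the same: global datacenter storage roughly doubles in four years.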
According to the report, AI-specialized datacenters look different than conventional facilities and may require operators to plan, design, and allocate power resources based on the type of data processed or the stage of GenAI development. Heat removal is a case in point: the huge increase in GPUs will surpass current standards. Air cooling typically accounts for roughly 40% of a datacenter’s electricity use. Users, particularly hyperscalers and operators, are shifting from traditional air-based cooling to liquid cooling and rear-door heat exchangers. Case studies have shown that liquid cooling offers significant reductions in cooling power, as high as 90%, while allowing more densely packed systems that, in turn, increase per-rack power usage.
Hyperscalers, which have been at the forefront of adopting AI and high-performance computing (HPC), have the greatest need for high-density infrastructure (see Figure). Currently, their large facilities have an estimated average density of 36 kW per rack; with increased liquid cooling and denser GPU hardware, IDC estimates this will grow at a 7.8% CAGR in the coming years to approach 50 kW per rack by 2027. [2]
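The 50 kW figure follows from simple compounding of the current average density. A quick sketch (assuming, as the 2023 baseline and 2027 target suggest, a four-year projection window):

```python
def project_density(start_kw, annual_rate, years):
    """Average rack density (kW) after `years` years of compound growth."""
    return start_kw * (1 + annual_rate) ** years

# 36 kW/rack in 2023 growing at a 7.8% CAGR through 2027 (4 years):
print(round(project_density(36, 0.078, 4), 1))  # 48.6 kW/rack, approaching 50
```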
In an interview with Andy Cvengros, managing director of U.S. Data Center Markets for JLL, HPCwire learned about many of the issues facing datacenter deployments.
The first and most important issue, he suggests, is planning. For instance, Cvengros noted that with power grids becoming effectively tapped out and transformer lead times exceeding three years, operators will need to innovate.
The GPU squeeze is taking place at the datacenter level as well, where a small colocation deployment of 4-5 racks will have a harder time finding a datacenter because the hyperscalers are requesting entire datacenter campuses.
According to Cvengros, all major metro areas are basically tapped out, and secondary areas, like Reno, NV, or Columbus, OH, are now prime locations for new datacenter construction. The demand is expected to continue, and new datacenters are 3.5 years out. He reiterated, “Planning is key.”
For smaller HPC GPU-cluster colocation (e.g., a university that is completely tapped out of power and space), he recommends working with a company that specializes in high-performance systems. In his experience, datacenter providers must actively track usage and availability across global markets to deliver any near-term colocation capacity.
The Datacenter Squeeze is a Global Problem
The JLL report also lists the critical changes needed across the globe to address increased power usage.
- In Europe, one-third of the grid infrastructure is over 40 years old, requiring an estimated €584 billion of investment by 2030 to meet the European Union’s green goals.
- In the United States, meeting energy transition goals to upgrade the grid and feed more renewable energy into the power supply will require an estimated $2 trillion.
- The rapid growth of datacenters is also putting pressure on limited energy resources in many countries. In Singapore, for example, the government enacted a moratorium to temporarily halt construction in certain regions to carefully review new datacenter proposals and ensure alignment with the country’s sustainability goals.
The global GenAI energy demand presents both opportunities and challenges to the datacenter sector. GenAI needs power on a scale not seen in the past. Finding GPUs for HPC is only half the problem; where to plug them in may become the bigger challenge.
[1] IDC, Revelations in the Global StorageSphere, July 2023
[2] IDC, Asia/Pacific (Excluding Japan) DC Deployment Model and Spend Forecast, 2H22: 2022–2027, #AP50326223, July 2023