Hewlett Packard Enterprise | ComnetCo White Paper

Don’t let your AI factory lose its cool

Discover how ComnetCo and HPE can help you deploy Direct Liquid Cooling

Everyone knows that demand for AI has exploded. Foundation models have ballooned from millions to trillions of parameters, and training them demands chips of extraordinary density: the latest AI-tuned GPUs and accelerators pack hundreds of billions, or even trillions, of transistors. And as a wave of models graduates from training, satisfying demand for inference as users ask for real-world answers means still more chips devouring electricity. Server racks can now consume 120 kilowatts or more, generating unprecedented heat. All of this makes cooling one of the hottest topics in High Performance Computing and AI.

Fortunately, there’s a very cool solution. Conventional air cooling keeps server components within safe temperatures using fans and computer room air conditioners. Direct Liquid Cooling (DLC) instead relies on the superior thermal properties of liquids for far more efficient heat transfer: water can absorb as much as 3,000 times more heat than the same volume of air.
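The roughly 3,000x figure follows from basic physics. Here is a back-of-envelope Python sketch, using standard textbook values for density and specific heat (these constants are illustrative reference values, not figures from HPE or ComnetCo), comparing how much heat a cubic meter of water can absorb versus a cubic meter of air:

```python
# Back-of-envelope comparison of volumetric heat capacity.
# Constants are typical room-temperature reference values (assumptions).

WATER_DENSITY = 998.0        # kg/m^3
WATER_SPECIFIC_HEAT = 4186.0 # J/(kg*K)
AIR_DENSITY = 1.2            # kg/m^3
AIR_SPECIFIC_HEAT = 1005.0   # J/(kg*K)

def volumetric_heat_capacity(density, specific_heat):
    """Heat one cubic meter of fluid absorbs per degree of temperature rise, J/(m^3*K)."""
    return density * specific_heat

water = volumetric_heat_capacity(WATER_DENSITY, WATER_SPECIFIC_HEAT)
air = volumetric_heat_capacity(AIR_DENSITY, AIR_SPECIFIC_HEAT)

print(f"Water: {water:,.0f} J/(m^3*K)")
print(f"Air:   {air:,.0f} J/(m^3*K)")
print(f"Ratio: {water / air:,.0f}x")
```

The computed ratio lands in the 3,000–4,000x range, consistent with the figure cited above.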

The benefits of this greater thermal efficiency include:

  • Higher-density racks and more processing power in the same footprint, because DLC removes more heat.
  • Fewer fans and smaller mechanical components, which means less noise and simpler maintenance.
  • Extended life and improved reliability of costly GPUs and CPUs.
  • A greener AI data center, because DLC can reduce energy consumption by up to 25%.

How does it work? DLC and Direct-to-Chip Liquid Cooling (DCLC) circulate a mix of water and a fluid similar to automobile antifreeze through the server. Piped into an AI system, this chilled fluid flows across cold plates that extract heat directly from all of the server’s critical components.
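To see why liquid is the practical choice at rack densities like the 120 kW mentioned earlier, consider a rough sizing sketch using the standard heat-transfer relation Q = ṁ · c_p · ΔT. The 10 °C coolant temperature rise and the fluid constants below are illustrative assumptions, not specifications from HPE or ComnetCo:

```python
# Rough sizing sketch: coolant flow needed to carry away 120 kW of rack heat,
# from Q = m_dot * c_p * delta_T. All constants are illustrative assumptions.

RACK_POWER_W = 120_000.0  # rack heat load from the example above
DELTA_T_K = 10.0          # assumed coolant temperature rise across the rack

WATER_CP = 4186.0         # specific heat of water, J/(kg*K)
AIR_CP = 1005.0           # specific heat of air, J/(kg*K)
AIR_DENSITY = 1.2         # kg/m^3 at roughly room temperature

# Mass flow required: m_dot = Q / (c_p * delta_T)
water_kg_s = RACK_POWER_W / (WATER_CP * DELTA_T_K)  # a few liters of water per second
air_kg_s = RACK_POWER_W / (AIR_CP * DELTA_T_K)
air_m3_s = air_kg_s / AIR_DENSITY                   # roughly ten cubic meters of air per second

print(f"Water flow: {water_kg_s:.1f} kg/s (about {water_kg_s:.1f} L/s)")
print(f"Air flow:   {air_m3_s:.1f} m^3/s")
```

Under these assumptions, a few liters of water per second do the work of roughly ten cubic meters of air per second, which is why fans and room air conditioners run out of headroom long before a cold-plate loop does.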