There are many factors driving the need for this revolutionary technology:
1) Data center efficiency gains have stalled since 2018
2) Chip power / IT power densities are increasing rapidly
3) Data center water use has surpassed energy use as an environmental concern
4) Compute power is moving toward the edge and compacting
5) Data center e-waste is a growing problem
6) Billions are being invested in corporate sustainability initiatives
The largest single factor, and the one that must be addressed first, is the rapid increase in chip and IT power densities. While Moore's Law long held that transistor counts (and with them, processor performance) would double roughly every eighteen months to two years, the compute consumed by the largest AI training runs now doubles roughly every three and a half months. Keeping that pace requires the most powerful chips ever designed, and those chips generate so much heat that it cannot be removed effectively or efficiently with air.
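As a rough sanity check on those two doubling rates, a short calculation shows how stark the gap is; the function name below is illustrative, not from the source:

```python
# Compare the annual growth implied by two different doubling periods.
def annual_growth(doubling_months: float) -> float:
    """Growth factor over 12 months, given a doubling period in months."""
    return 2 ** (12 / doubling_months)

moore = annual_growth(18)   # classic 18-month doubling pace
ai = annual_growth(3.5)     # AI-compute doubling pace cited in the text

print(f"Moore's Law pace: ~{moore:.2f}x per year")  # ~1.59x
print(f"AI compute pace:  ~{ai:.1f}x per year")     # ~10.8x
```

In other words, a 3.5-month doubling period compounds to roughly a tenfold increase every year, versus well under 2x at the traditional pace.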
For example, in April 2021 Cerebras released its WSE-2 chip, which packs 2.6 trillion transistors and 850,000 AI-optimized cores and draws 23 kW of power. Most air-cooling systems in data centers can handle only about 8 kW to 12 kW per rack, so even though three WSE-2 systems could physically fit in a rack, you could not blow enough air through it to cool even one of them. And even if you somehow engineered an air-cooled solution, with AI compute doubling every few months, the approach would not stay viable for long.
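The arithmetic behind that rack-level argument can be laid out explicitly. The figures below come from the text; the constant and function names are my own illustrative choices:

```python
# Rack heat-load check using the figures from the text:
# one WSE-2 draws about 23 kW, while typical data center air
# cooling handles roughly 8-12 kW per rack.
CHIP_POWER_KW = 23.0          # power draw per WSE-2
AIR_COOLING_LIMIT_KW = 12.0   # upper end of a typical air-cooled rack

def rack_load(chips: int) -> float:
    """Total heat load in kW for a rack holding `chips` WSE-2 units."""
    return chips * CHIP_POWER_KW

for n in (1, 3):
    load = rack_load(n)
    excess = load - AIR_COOLING_LIMIT_KW
    print(f"{n} chip(s): {load:.0f} kW load, "
          f"{excess:.0f} kW beyond the air-cooling limit")
```

Even a single chip exceeds the upper air-cooling limit by 11 kW; a fully loaded rack overshoots it by 57 kW, which is why air cooling is ruled out from the start.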