Commentary: HP wades into warm water to expand computing market

LAS VEGAS — At Hewlett-Packard’s Discover event here, the company had plenty of news about new data center technologies for servers, networking, storage and services.

But I believe one of the more interesting announcements was the Apollo 8000 System for high-performance computing (HPC). What makes it interesting is that it doesn’t use air for cooling, but a warm-water system. While HP already has the highest market share in HPC, the company is clearly going after IBM’s and Cray’s HPC business, and also trying to make HPC more palatable for large businesses.

HPC is most commonly used by researchers, academics, government entities and energy companies for brainy things like mapping genomes, running weather simulations, finding oil and supporting defense work. A growing segment of HPC users is applying these systems to design electronics, run Monte Carlo financial simulations and even design cars. The biggest technology issue for HPC is getting the raw performance needed at the right energy levels, in the right amount of data center space and at the right cost.

To pack more servers into a smaller space, known as server density, many HPC facilities build custom systems that flow cold water inside the server, across everything that gets hot, like processors, memory, power supplies and storage, to cool it. Water is roughly 1,000 times more effective at cooling than fans and air, but the water needs to be refrigerated and piped through the data center.

Sound expensive? It is very expensive. Sound risky? Yes, very risky. Remember the last time you dropped your phone in water? Now imagine springing a water leak inside hardware costing hundreds of millions of dollars.

The Apollo 8000 uses what HP calls a “dry-disconnect” water system. The water flows outside the server chassis and never actually enters it or gets near the processors, memory, storage or power supplies. Copper heat pipes then carry the “cool” to all the hot components.

Again, this kind of technology is intended to pack even more performance into a smaller amount of space, which has many ancillary benefits. HP brought NREL (the National Renewable Energy Laboratory) on stage, where the lab reported that it uses the excess heat from its Apollo 8000 during the winter months to heat buildings and melt snow on sidewalks.

Steve Hammond, director of NREL’s Computational Science Center, says, “Leveraging the efficiency of HP Apollo 8000, we expect to save $800,000 in operating expenses per year, because we are capturing and using waste heat.”

“We estimate we will save another $200,000 that would otherwise be used to heat the building,” Hammond says. “We are saving $1 million per year in operations costs for a data center that cost less to build than a typical data center.”

That’s cool, literally.

What’s really interesting is what this could mean for the future of the HPC market. If closed water systems like the Apollo 8000 can prove out their density, throughput and reliability, I would expect more classic enterprises to take on HPC workloads on their own without having to build a cost-prohibitive data center.

That, in turn, would transform the niche HPC market into a much larger, more mainstream one.