New Mexico-based technology firm Aquila is announcing the first OCP-inspired server rack to use fixed cold plate liquid cooling technology. Based on the Facebook-initiated Open Compute Project (OCP) standard, the Aquarius rack integrates patented third-generation cooling technology designed by Clustered Systems. The platform supports up to 108 Xeon servers per rack and will target high-density HPC and hyperscale computing applications.
The OCP width allows three Intel Kennedy Pass server boards to be placed under each cold plate. Thirty-six servers fit in a 12 OU modular insert, and up to three inserts can be stacked in a 48 OU OCP rack for a total of 108 servers. Fully outfitted with server nodes, the Aquarius rack still has 3U available for top-of-rack switching. The other HPC-focused OCP rack design on the market, Tundra-ES from Penguin Computing, supports up to 96 servers per rack.
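For readers doing the math, the capacity figures work out as follows (a back-of-envelope sketch; the 12-cold-plates-per-insert count is inferred from the numbers above rather than stated by Aquila):

```python
# Back-of-envelope check of the Aquarius rack capacity figures.
# The 3-boards-per-plate, 36-per-insert and 108-per-rack numbers come from
# the article; the 12 cold plates per insert is inferred from them.
boards_per_cold_plate = 3      # Intel Kennedy Pass boards under each plate
cold_plates_per_insert = 12    # inferred: 36 servers / 3 boards per plate
inserts_per_rack = 3           # three 12 OU inserts in a 48 OU OCP rack

servers_per_insert = boards_per_cold_plate * cold_plates_per_insert
servers_per_rack = servers_per_insert * inserts_per_rack
print(servers_per_insert, servers_per_rack)   # 36 108
```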
There’s also room in each node for two 2.5-inch hard drives or SSDs to support virtualized and hybridized storage approaches.
A future Aquarius product will hold up to 72 Adams Pass Knights Landing server nodes. In this configuration, two of the slightly wider Adams Pass boards fit under each cold plate, for a total of 24 per insert and 72 per rack across three inserts. Stuffed with top-bin KNLs (the 7290s), four Aquarius racks can support a theoretical peak performance of just under 1 petaflops.
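That petaflops figure is easy to sanity-check (a sketch assuming the Xeon Phi 7290's published specs of 72 cores at a 1.5 GHz base clock with two AVX-512 FMA units per core):

```python
# Rough theoretical-peak estimate for the Knights Landing configuration.
# Assumes Xeon Phi 7290: 72 cores, 1.5 GHz base clock, 2 AVX-512 FMA units
# per core, i.e. 32 double-precision FLOPs per core per cycle.
cores, clock_ghz, flops_per_cycle = 72, 1.5, 32
node_peak_tflops = cores * clock_ghz * flops_per_cycle / 1000   # ~3.46 TF

nodes_per_rack, racks = 72, 4
total_pflops = node_peak_tflops * nodes_per_rack * racks / 1000
print(round(total_pflops, 3))   # ~0.995 -- "just under 1 petaflops"
```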
The liquid cooling design from Clustered Systems uses cold plate technology to cool the entire server: the CPUs, the DIMMs and any components that consume more than 2 watts. Clustered Systems founder and CEO Phil Hughes explained that heat from the lower-power components (under 2 watts) dissipates through the board and is collected by the cold plate through a combination of radiation, conduction and convection.
“With Asetek and CoolIT, what they do is put little blocks on top of the CPUs and pass water through those individual blocks so it can only cool the CPU and nothing else, so you’ve only got half a solution,” said Hughes.
Hughes referenced another vendor that employs a single cooling array fitted over two CPUs. Because the CPU lids were not fully coplanar, the plate could not make sufficient thermal contact to cool both CPUs, said Hughes.
“We solved that problem several years ago by the invention of a highly-compliant, highly-conductive thermal interface, which, when it’s pressed on top of the blocks, flows so there’s very good thermal contact between the blocks that are sitting on the CPUs, the tops of the DIMMs and the other components, and the cold plate itself,” said Hughes. “We’re able to cool everything and to do it in such a way that it is very easy to remove a server and service it and push it back in again without having to disconnect tubes and deal with leaks and so on.”
The cold plate itself is hard-soldered into the rack to an input and output manifold. There are two large pipes going up the back and some very small pipes running into each cold plate, so the plumbing doesn’t get in the way of wiring, Hughes explained.
The liquid cooling system uses 30-degree Celsius ASHRAE-spec water piped directly into the chassis, which eliminates the need for a coolant distribution unit. There is also no need for fans, and thus no fan vibration.
Said Hughes, “The fact that we’ve removed all the fans and individual power supplies and we’ve eliminated all those electro-mechanical parts that are prone to failure in a large scaled-out system – it’s generally going to improve reliability. We’re also keeping very steady control of the thermal junction temperatures of the semiconductor, which also contributes to long-term reliability.
“We ran an earlier system at SLAC National Accelerator Laboratory for almost two years, accumulating about two million server hours with zero failures on any component in the system. That’s the sort of thing you can expect when you get rid of all those fan vibrations, plus a more peaceful environment,” Hughes added.
The partners say another advantage is the ability to run the servers in Turbo mode continuously. “We confirmed here in New Mexico, under a full rack load we can run Turbo full-duty cycle 100 percent of the time and not get into the thermal shutdown conditions,” said Bob Bolz, head of Aquila’s HPC and datacenter business development. “We never even get close to the thermal limits where the microprocessor starts to throttle itself down. The implication is that with a lower-bin, cheaper CPU you can boost your performance to 100 percent Turbo duty cycle and perhaps get 15-25 percent more performance out of the same cheaper bin.”
They claim to have achieved a teraflops (LINPACK) per board using dual-socket E5-2697 WS v4s with 64 GB of RAM running Turbo, without any tuning.
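For context, that is a healthy fraction of the node's theoretical peak (a sketch; it assumes the E5-2697 v4's published 18 cores at the 2.3 GHz base clock and 16 double-precision FLOPs per core per cycle, ignoring any Turbo uplift):

```python
# Peak versus reported LINPACK for a dual-socket E5-2697 v4 node.
# Assumes 18 cores per socket at the 2.3 GHz base clock and 16 DP FLOPs
# per core per cycle (AVX2 with FMA); Turbo would raise the peak somewhat.
cores, base_ghz, flops_per_cycle, sockets = 18, 2.3, 16, 2
node_peak_gflops = cores * base_ghz * flops_per_cycle * sockets   # ~1325 GF
reported_linpack_gflops = 1000                                    # ~1 TF claimed
print(f"~{reported_linpack_gflops / node_peak_gflops:.0%} of base-clock peak")
```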
Aquila and Clustered report that all these efficiencies pay off in terms of ROI and TCO.
The TCO for a typical datacenter runs about $5-10 million per megawatt. With a fully liquid-cooled system, the figure drops to around $3 million or less per megawatt, according to Hughes, owing to reduced infrastructure equipment and space requirements as well as energy savings.
“The rack may be more expensive on a per rack basis, but because of our density and the fact that our cooling efficiency will cut the power bill drastically, close to 50 percent, you recover the cost of the equipment well within the first year’s operation of your datacenter,” Bolz added.
Aquila and Clustered provided this diagram to illustrate the TCO equation.
Annual amortization comparison
The solution is capable of cooling over 100 kilowatts per rack. Stuffed to capacity with 108 dual-socket Xeon E5 v4 servers, the rack draws about 50 kilowatts, but the cold plates themselves are capable of handling the higher figure, Hughes asserted. “No boards right now are going to put out that type of power, but we are ready for it when it comes,” he said.
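The roughly 50-kilowatt figure is consistent with a simple per-node power budget (a sketch; the non-CPU wattage below is an illustrative assumption, not a vendor number):

```python
# Illustrative power budget for the fully populated Xeon rack.
# The 108-server count and the ~50 kW / 100+ kW rack figures come from the
# article; the per-node component wattages below are assumptions.
servers = 108
cpu_tdp_w = 145           # E5-2697 v4-class TDP, two sockets per node
other_w = 170             # DIMMs, drives, NIC, regulator losses (assumed)
node_w = 2 * cpu_tdp_w + other_w          # ~460 W per node
rack_kw = servers * node_w / 1000
print(round(rack_kw, 1))  # ~49.7 kW -- near the cited load, well under 100 kW
```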
Aquila is an employee-owned small business, located in New Mexico, that started out in HPC. “We were the original Intel rep in the area – and at the time microprocessors were just starting to get going in the 70s and 80s,” Bolz shared. “We called it advanced computing at that point.”
The company moved into manufacturing and had a long stretch supplying specialized products to the Department of Defense and Department of Energy. It still manufactures high-end radiation detection equipment for tracking radiological materials that could be used by bad actors.
Development on Aquarius began in 2015 when Aquila and Clustered Systems responded to the Trilab CTS-1 procurement, which was ultimately awarded to Penguin Computing.
“We have a long manufacturing story – we are very entrepreneurial – we got a call from Clustered when the CTS-1 proposal came up, and we put in a competitive bid. We believe we came in second to Penguin – based on our quote, by a couple million dollars. They encouraged us to move forward with designing the product and finishing it off to a production model – at that point Aquila struck an agreement with Clustered,” said Bolz.
Given how their partnership began and their ties to the New Mexico labs (Los Alamos and Sandia), it makes sense that the companies will focus their go-to-market strategy on government HPC, but they are also looking toward hyperscale market opportunities.
“The HPC community is quick to adopt new things, anything that will make for more reliability and high performance,” Bolz observed. “I think because of the scale involved in HPC, they are willing to take more risk than you would see in other markets. However we’re beginning to see the hyperscale, hyperconverged datacenters as a good market for this as well.
“If we send them an open rack with servers in it, they don’t have a problem with that—they are used to very open type equipment, so we don’t see that as a problem. But for the high-performance computing folks, of course, we’re going to have sides and doors on our racks too, because what they are used to seeing has more of a finished feel to it – so the good part is we are suitable for both of those markets.”
Exascale also factors prominently into Aquila and Clustered Systems’ vision. The partners’ design goal was to reduce the cost of cooling server resources to under 5 percent of overall datacenter energy usage.
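Read as a PUE-style metric (my interpretation, not a figure the partners stated), a sub-5-percent cooling share implies a cooling contribution to PUE of roughly 1.05 or better:

```python
# Interpreting the "under 5 percent" cooling goal in PUE-like terms.
# This mapping is an illustration, not a number from Aquila or Clustered
# Systems, and it treats all non-cooling power as IT load.
cooling_share = 0.05                    # cooling as a fraction of total power
it_share = 1 - cooling_share            # remainder assumed to be IT load
pue_from_cooling = 1 / it_share
print(round(pue_from_cooling, 3))       # ~1.053
```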
“As we move from a four teraflops board to an eight or a ten teraflops board, even if they double the heat on it,” said Bolz, “I think we’ll have the economics from both the reliability standpoint and power economy standpoint to make exascale a lot more palatable. That’s really where we see the next generation of systems that we’ll come out with looking out five years.”
Aquila is taking orders now and production systems will begin shipping this quarter. You can check out the Aquarius platform in person at SC16 in Salt Lake City, Utah, in November.