July 05, 2011
For the past five years, SGI's Altix ICE platform has been the company's bread-and-butter HPC cluster offering. That trend looks to continue as the company gears up for its fifth-generation design. But this iteration of Altix ICE, codenamed "Carlsbad 3," is more than just a processor and InfiniBand refresh.
According to SGI marketing VP Bill Mannel, the first four generations of Altix ICE were variations on a theme, using essentially the same blade infrastructure. As Intel or AMD came up with new processors and Mellanox rolled out faster InfiniBand parts, SGI slapped them into the Altix ICE lineup. But Carlsbad 3 has been redesigned from scratch, says Mannel.
Like many HPC blade vendors, SGI is going after the three big attributes customers are clamoring for: higher density, lower cost, and more efficient cooling. Just riding the Moore's Law curve for microprocessors gets you halfway there for the first two attributes, but better cooling required some re-engineering on SGI's part.
The previous versions of Altix ICE relied on either standard air cooling or liquid cooling in the form of water-cooled doors bolted onto the enclosures. But as with many denser blade designs, SGI has added the option of a cold plate, where the liquid runs against the hottest components of the blade. That design also allows for warm water cooling, where the water temperature can run up to a tepid 30 degrees Celsius (86F), and which can be cooled via a liquid-to-air exchange rather than a power-sucking chiller. Although this is a new feature for the Altix ICE, the cooling technology is derived from the Rackable ICE Cube container that SGI offers today.
Mannel says the 30C limit for warm water cooling is showing up in more RFPs, and is becoming more accepted as datacenters, HPC or otherwise, are forced to cram more compute capacity into the same building. For supercomputing setups, this can be especially useful for customers who want to build their systems with extra-hot processors like high-bin (high GHz) x86 parts and big wattage accelerators like GPUs.
Speaking of which, the new ICE machines will sport the new Sandy Bridge EP Xeons, Intel's upcoming CPU offering for dual-socket servers. As far as accelerators go, at the very least SGI will be offering NVIDIA's latest GPUs, either the standard M2090 Tesla modules or the related X2090 designed for extra-dense blade setups. According to Mannel, the final design is being worked out this month.
And although Intel's MIC (Many Integrated Core) Knights Corner coprocessor won't be arriving until later in 2012, it's a pretty sure bet that the Altix ICE will adopt it when Intel releases the commercial product. At the International Supercomputing Conference last month, SGI was one of Intel's system partners cheerleading the MIC development effort. (Currently SGI offers a MIC development system based on the Rackable H4002 server and the Knights Ferry part.) In fact, MIC is apt to show up across multiple SGI HPC offerings next year. "We do intend to include it into our system plans going forward," says Mannel.
InfiniBand, too, got an upgrade in the Carlsbad 3 design. The default interconnect will be Mellanox FDR InfiniBand, but since SGI is using mezzanine cards for the network I/O, the customer could opt for 10GbE too. The rationale behind mezzanine cards is that they are cheaper than discrete network adapters and more flexible than hardwiring each blade with a specific interconnect by plopping the network silicon directly onto the motherboard. Also, when EDR InfiniBand goes into production, customers should be able to just swap the FDR cards for EDR versions, leaving the compute blade as is.
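For a sense of what each generational swap buys, the nominal payload rates of the InfiniBand flavors in play can be worked out from the published per-lane signaling rates and encoding schemes. The quick sketch below uses those roadmap figures, not anything SGI quoted; EDR hardware does not yet exist, so treat its row as the planned spec.

```python
# Nominal usable bandwidth of a standard 4x InfiniBand link, by generation.
# Per-lane rates and encodings are the published roadmap figures (assumption:
# no protocol overhead beyond line encoding is counted here).
links = {
    # generation: (signaling rate per lane in Gb/s, encoding efficiency)
    "QDR": (10.0,      8 / 10),   # 8b/10b encoding
    "FDR": (14.0625,  64 / 66),   # 64b/66b encoding
    "EDR": (25.78125, 64 / 66),   # 64b/66b encoding
}
for name, (per_lane, eff) in links.items():
    data_rate = 4 * per_lane * eff   # standard links are 4 lanes wide
    print(f"{name}: {data_rate:6.1f} Gb/s payload per 4x link")
# QDR ~32.0, FDR ~54.5, EDR ~100.0
```

In other words, the FDR default carries about 70 percent more payload per link than the previous generation's QDR, and an eventual EDR card swap would nearly double that again.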
Also for increased flexibility, the SGI engineers went to a power shelf design. In the current ICE setup, the power supplies are inside the blade enclosure; with the new design, they will be in separate units. That allows customers to add (or subtract) power supplies as needed -- for example, when installing more blades or hooking up accelerators.
Density-wise, Carlsbad 3 will offer two configurations, each more compact than the current Altix ICE 8400. In the standard configuration, the new ICE will up the blade count from 16 to 18 per 10U enclosure. That's 12.5 percent more compute per rack unit. And since the Sandy Bridge chips will provide up to two more cores per socket than the 6-core Westmere EP silicon in the 8400, the compute density gets another 33 percent boost.
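Those two figures are easy to sanity-check. Below is a minimal sketch of the arithmetic, assuming two sockets per blade (Sandy Bridge EP is a dual-socket part) and the 8-core count implied by "up to two more cores" than Westmere EP's six:

```python
# Back-of-the-envelope density math for the new standard configuration.
blades_old, blades_new = 16, 18   # blades per 10U enclosure
cores_old, cores_new = 6, 8       # cores per socket (Westmere EP vs. Sandy Bridge EP)
sockets = 2                       # assumed dual-socket blades

blade_gain = blades_new / blades_old - 1   # 0.125 -> 12.5% more blades
core_gain = cores_new / cores_old - 1      # ~0.333 -> ~33% more cores per socket

cores_per_encl_old = blades_old * sockets * cores_old   # 192 cores
cores_per_encl_new = blades_new * sockets * cores_new   # 288 cores

print(f"blade gain: {blade_gain:.1%}")   # 12.5%
print(f"core gain:  {core_gain:.1%}")    # 33.3%
print(f"overall:    {cores_per_encl_new / cores_per_encl_old:.2f}x")  # 1.50x
```

Compounded, the two gains put the new standard enclosure at roughly 1.5 times the core density of the 8400.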
Also, instead of coming in 24-inch wide racks, the engineers have squished them into standard 19-inch boxes. That will enable the new machines to more easily co-exist with vanilla datacenter gear.
The other density innovation is the M-Rack configuration. Basically it's a double-density setup that fits twice as many blades (36) into an enclosure by sandwiching two motherboards into a blade slot -- what SGI calls the Gemini Twin blade. The M-Rack essentially flips two standard racks 90 degrees and pushes them together, squeezing out the space that would have been the hot aisle (and compensating with the type of water cooling discussed above). The M-Rack is nearly twice as dense as the new standard density offering and nearly three times as dense as the current Altix ICE 8400.
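Under the same assumptions, the per-enclosure core counts line up with those "nearly twice" and "nearly three times" figures. (This counts cores per enclosure; floor-space density will differ somewhat, since the M-Rack merges two rack footprints into one.)

```python
# Per-enclosure core counts under the same dual-socket assumption as above.
cores_8400  = 16 * 2 * 6   # current Altix ICE 8400: 192 cores
cores_std   = 18 * 2 * 8   # new standard enclosure: 288 cores
cores_mrack = 36 * 2 * 8   # M-Rack enclosure:       576 cores

print(cores_mrack / cores_std)    # 2.0 -- twice the new standard density
print(cores_mrack / cores_8400)   # 3.0 -- three times the 8400
```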
The new ICE machines are scheduled to start shipping in December, and according to Mannel, SGI already has some customers in the pipeline. Although he didn't say who, one of them could be NASA, which recently upgraded its Altix ICE Pleiades supercomputer with additional hardware. That system is now in the petaflop club and sits at number 7 on the TOP500 list.
SGI's biggest supercomputer to date, Pleiades represents a microcosm of Altix ICE history. The system was originally installed in 2008 and has been upgraded on a continuous basis ever since. Through the aggregation of Altix ICE hardware of various generations, it now contains a mixture of Harpertown, Nehalem and Westmere Xeon processors and a combination of DDR and QDR InfiniBand networks.
Pleiades may reach 10 petaflops as early as next year, as NASA originally intended. If so, that milestone is almost certain to be reached with the upcoming Carlsbad 3 blades, most likely souped up with accelerators. And given SGI's and NASA's penchant for Intel silicon, those accelerators could very well be MIC coprocessors.