June 14, 2012
The sequel to SGI's UV supercomputer has arrived. Dubbed UV 2, the new platform doubles the number of cores and quadruples the memory that can be supported under a single system. The product, which will be officially announced next week at the International Supercomputing Conference in Hamburg, represents the first major revision of SGI's original UV, which the company debuted in 2009.
The UV's claim to fame is its ability to support "big memory" applications, whose datasets can stretch into the multiple-terabyte realm. Since the architecture supports large amounts of global shared memory, applications don't have to slice their data into chunks to be distributed and processed across multiple server nodes, as would be the case for compute clusters. Thanks to SGI's NUMAlink interconnect, UV is able to glue together hundreds of CPUs and make them behave as a single manycore system with gobs of memory. Essentially, you can treat the machine as an ultra-scale Linux PC.
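The programming payoff of that single system image is that parallel workers can all read the same in-process dataset instead of exchanging message-passing chunks. The sketch below illustrates the idea with a thread pool over one shared array; the sizes are scaled down so it runs anywhere, and it is an illustration of the shared-memory model, not SGI code.

```python
# On a single-system-image machine like UV, a large dataset can live as one
# ordinary in-process array -- no MPI-style decomposition across nodes.
from multiprocessing.dummy import Pool  # thread pool: workers share memory

N = 1_000_000
data = list(range(N))          # stands in for a huge shared-memory dataset

def partial_sum(bounds):
    lo, hi = bounds
    return sum(data[lo:hi])    # every thread reads the same array directly

chunks = [(i, min(i + 250_000, N)) for i in range(0, N, 250_000)]
with Pool(4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)  # identical to sum(data), computed over shared memory
```

On a cluster, each worker would instead hold only its own slice of `data` and results would travel over the interconnect; here the only coordination cost is handing out index ranges.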
The new UV 2 takes this to another level. While the original UV could scale up to 2,048 cores and 16 TB of memory on a single system, UV 2 doubles the max core count to 4,096 and quadruples the memory capacity to 64 TB. Even in the era of big data, that encompasses a lot of applications, at least those that don't rely on Web-sized datasets.
Even with the lesser memory limits of the first-generation UV, the supercomputer has worked its way into application niches across the data-intensive spectrum, primarily in technical computing, but a few on the business side as well. UV has had particular success in areas like life sciences and manufacturing, where the HPC cluster/MPI application paradigm never became fully entrenched. A lot of these applications had their origins on PCs or workstations, so the step up to a single-system-image UV was a natural one once those users exhausted RAM on the desktop.
The platform has also found application uptake in chemistry, physics (especially astrophysics), defense and intelligence, and research areas like social media analytics. Even business analytics applications like fraud detection are fair game. An example of the latter is a world-wide courier service that is employing a UV machine to detect fraudulent activity in real-time.
To crank up the performance and scalability on this second-generation machine, a lot of the UV parts had to be upgraded, starting with a new CPU. On that front, the UV 2 engineers opted for the latest Intel "Sandy Bridge" Xeon E5-4600 family chips, which replace the Nehalem EX and Westmere EX CPUs offered in the first UV. A fully loaded UV 2 rack with 64 CPUs can now deliver 11 peak teraflops, which is nearly twice the flops of the original Nehalem-based machine.
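The 11-teraflop rack figure is easy to sanity-check with back-of-envelope arithmetic. The per-socket assumptions below (8 cores per E5-4600, 8 double-precision flops per cycle with AVX, a ~2.7 GHz clock) are not stated in the article; they are plausible values for the parts of that era.

```python
# Back-of-envelope check of the quoted 11 TF per fully loaded UV 2 rack.
# Assumed (not from the article): 8 cores/socket, AVX giving 8 DP
# flops/cycle (4-wide add + 4-wide multiply), ~2.7 GHz clock.
sockets = 64
cores_per_socket = 8
flops_per_cycle = 8
clock_hz = 2.7e9

peak_tflops = sockets * cores_per_socket * flops_per_cycle * clock_hz / 1e12
print(f"{peak_tflops:.1f} TF")  # ~11.1 TF, in line with the quoted figure
```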
Conveniently, the Sandy Bridge processor provides an extra couple of address bits, which is what makes the 64 TB memory reach possible. (ScaleMP's virtual SMP technology also enables a 64 TB memory reach, in this case on Sandy Bridge-based clusters, but does so without the performance benefit of a custom interconnect.) The new CPU also incorporates native support for PCIe Gen 3, basically doubling I/O bandwidth to storage and other external devices.
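The "extra couple of address bits" remark also checks out arithmetically: each additional physical address bit doubles addressable memory, so two more bits quadruple it. The specific widths below (44 bits on the earlier Xeons, 46 on Sandy Bridge) are an assumption consistent with the 16 TB and 64 TB limits, not figures from the article.

```python
# Two extra physical address bits quadruple addressable memory:
# 2^44 bytes = 16 TB (first-gen UV), 2^46 bytes = 64 TB (UV 2).
TiB = 2 ** 40
reach = {bits: (2 ** bits) // TiB for bits in (44, 46)}
print(reach)  # {44: 16, 46: 64}
```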
Speaking of which, UV is able to hook into multiple accelerators, both NVIDIA GPUs and Intel MIC, via a PCIe-based external chassis. Up to 8 GPUs and some unknown number of MIC coprocessors can be linked to a system in this way. At least one customer, the UK's Computational Cosmology Consortium (COSMOS), is in line to get a MIC-accelerated UV 2.
Aside from the CPU, the other big UV 2 upgrade is NUMAlink 6, the next generation of SGI's custom system interconnect. NUMAlink makes memory coherency across the UV blades possible; without this special chip, an E5-4600 system would max out at a mere 32 cores and 1.5 TB of memory. Besides adding support for the new E5 CPU, the interconnect also reduces the cabling requirements, while more than doubling the data rate of the previous generation NUMAlink 5, a pretty speedy interconnect in its own right.
"Even a nicely configured InfiniBand cluster really pales in comparison, in terms of system bandwidth that we can deliver," says Jill Matzke, director of server marketing at SGI.
But according to her, it's the improved memory capacity that is going to be the real draw here. "While the ability to scale more cores is interesting," she says, "we think the ability to scale memory is going to be the most important driver for customer uptake and deployment of this technology."
Product-wise, UV 2 will be offered in two incarnations, the UV 20 and the UV 2000. The former is a 4-way rackmount server that tops out at 32 cores and 1.5 TB -- the same upper limit you would find in a standard server based on E5-4600 parts. The UV 2000 is the one that can scale all the way up.
Not that you need to buy thousands of cores and terabytes of RAM right off. UV 2000 customers can start with just 16 cores and 32 GB of memory and slip more blades into the enclosure as budget allows. With lower-bin CPUs, that 16-core entry-point system is just $30,000, and according to Matzke, the price increases more or less linearly as you fill the rack with additional CPUs and RAM. Once you get beyond a single rack, the cost of extra cabling and top-of-rack routers gets factored in.
But even just four racks can get you all the way to 64 terabytes, so there's not a lot of hardware infrastructure involved. Remember, this is not a machine built to max out flops. As with the original UV, the idea here is to offer lots of shared memory in an affordable package -- at least relative to "big iron" mainframes. And while the UV may be more expensive than a flash-based system with a comparable memory footprint, SGI is claiming much better price-performance when data bandwidth and latency are taken into account.
If 64 TB of memory doesn't quite do it for you, SGI lets you lash together multiple systems if you're looking for a cluster of fat nodes. The maximum configuration in this case is 16K sockets and 8 petabytes of memory.
The UV 20 and UV 2000 are shipping now. And if you happen to be in Hamburg, Germany next week, the technology will be on display in SGI's booth at the International Supercomputing Conference.