SGI Colors New Shared Memory Machines Ultraviolet

By Michael Feldman

November 16, 2009

After what may be the longest development cycle ever for a supercomputer, SGI has unveiled the first commercial implementation of its Ultraviolet architecture. The company first announced “Project Ultraviolet” at SC03. Now six years later, it has launched Altix UV, the company’s first scale-up HPC system based on x86 technology. The Altix UV’s connection to the 2003 design is tenuous at best, but the new architecture does fulfill Ultraviolet’s original promise of delivering a shared memory architecture able to scale from a few sockets all the way up to a petascale supercomputer.

SGI Altix UV

Besides being simpler to program than distributed memory clusters, shared memory systems are especially well suited to I/O-bound and memory-bound applications; codes that depend upon a lot of inter-processor communication; and any type of application that uses large (as in terabyte-sized) in-memory databases. These shared memory systems can also be used in conjunction with large clusters to provide an “analysis supernode.”
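
To make the programming-model point concrete, here is a minimal sketch (illustrative only, not SGI code) of what shared memory buys you: on a system like UV, every core can walk a huge in-memory table with ordinary loads and stores, so a simple OpenMP loop suffices, whereas a distributed memory cluster would need the table partitioned across nodes and explicit MPI messages to perform the same scan.

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    /* Hypothetical illustration only: scan a large table that lives entirely in one
       shared address space. Every thread dereferences the same array directly; on a
       distributed memory cluster the table would have to be partitioned across nodes
       and queried with explicit messages. */
    int main(void)
    {
        size_t n = 1UL << 27;                      /* 128M entries (~1 GB), sized down for illustration */
        double *table = malloc(n * sizeof *table);
        if (table == NULL) return 1;

        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (size_t i = 0; i < n; i++) {
            table[i] = (double)i;                  /* plain loads and stores into shared memory */
            sum += table[i];
        }

        printf("threads available: %d, checksum: %g\n", omp_get_max_threads(), sum);
        free(table);
        return 0;
    }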

The two initial products, the Altix UV 1000 and Altix UV 100, are both based on Intel Nehalem-equipped blades, which are hooked together with SGI’s fifth-generation NUMAlink fabric. The software stack includes everything from the OS on up: the SGI Foundation Software, data management packages (XFS, CXFS, DMF), SGI’s ProPack and System Management tools, job schedulers (Altair PBS Professional and Moab), and developer tools and libraries. The machines come with either SUSE Linux Enterprise Server or Red Hat Enterprise Linux.

The blades themselves contain two eight-core Nehalem EX chips, each with a bank of four DDR3 memory channels. If a larger memory-to-core ratio is desired, there are six- and four-core options, as well as a single-socket configuration. An optional I/O riser allows for a choice of expansion slots or external I/O ports. Up to two PCIe slots are available on each blade, and these can be used to plug in external storage (SGI or otherwise) or GPGPUs.

SGI’s secret sauce is the UV hub, which sits on each blade and acts as the node controller. The hub, along with the NUMAlink 5 interconnect, is the technology that makes the supersized shared memory possible. The new interconnect delivers sub-microsecond latencies and 15.0 GB/sec of aggregate bandwidth per blade. The hub itself manages data traffic between the local CPU resources and the rest of the system, arbitrating between the local QuickPath Interconnect (QPI) links and the NUMAlink fabric.

According to Jill Matzke, Altix product manager, the SGI engineers decided to limit themselves to two sockets per blade in order to avoid overtaxing the QPI bandwidth, which needs to feed the NUMAlink fabric and I/O. Since the blades use only two sockets, one might wonder why SGI didn’t opt for the dual-socket Nehalem EP chips rather than Nehalem EX, which is designed to support up to eight sockets per board. Apparently, EX was chosen because it offered more QPI and memory bandwidth, both of which were essential to the UV design. In any case, the Nehalem EP design does not lend itself to external node controllers such as the UV hub.

The Altix UV 100 is aimed at the mid-range market, scaling from a single 3U rackmount unit containing two dual-socket blades up to a 7-teraflop, 96-socket machine that fits into a couple of racks. The upper limit on memory capacity on this product is 6 TB. The UV 100 targets users who need a moderate to large SMP environment for their x86 applications. At the maximum 96-socket configuration, 768 cores are available, which doubles to 1,536 threads thanks to Nehalem-style multithreading support.

The Altix UV 1000 is a cabinet solution that scales all the way to the top, that is, 256 sockets (yielding 2,048 cores or 4,096 threads) and 16 TB of memory. At the max configuration, this model delivers 18.6 peak teraflops in a 42U space. The 16 TB limit on the UV 1000 corresponds to the maximum memory reach of the Intel Nehalem processor. However, the UV 1000 design can actually scale beyond this limit by connecting multiple 256-socket systems in a 2-D torus topology. In this case, the system would be partitioned with multiple OS images but support a much larger shared global address space — up into petabytes. The upper limit supported by the UV hub is 32,768 sockets, which would equate to about 2 petaflops. SGI is certainly willing to help interested parties develop such systems, but the vast majority of customers will be able to fit their applications within the 256-socket, single system image machine.
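
As an aside, the 16 TB ceiling falls out of the processor’s physical addressing, and the other headline figures can be checked with quick arithmetic. The sketch below (in C; the 44-bit physical address width and the rough 2.27 GHz, 4-flops-per-clock breakdown are my assumptions, not figures from the article) reproduces the numbers quoted above:

    #include <stdio.h>

    int main(void)
    {
        /* Assumption (not stated in the article): Nehalem-EX exposes a 44-bit physical address. */
        unsigned long long reach_bytes = 1ULL << 44;
        printf("2^44 bytes = %llu TB\n", reach_bytes >> 40);        /* 16 TB, the UV 1000 memory ceiling */

        /* Core and thread counts at the maximum 256-socket configuration. */
        int cores = 256 * 8;
        printf("cores: %d, threads: %d\n", cores, cores * 2);       /* 2,048 cores, 4,096 threads */

        /* Peak performance per socket implied by 18.6 teraflops across 256 sockets. */
        double gf_per_socket = 18.6 * 1000.0 / 256.0;
        printf("~%.0f GF per socket (about 8 cores x 2.27 GHz x 4 flops/clock)\n", gf_per_socket);
        return 0;
    }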

Note that the current Itanium-based Altix 4700 reaches 128 TB because that CPU’s physical addressing is wider, although core count on those systems tops out at 1,024. That said, just getting a handful of terabytes of global memory on an x86 platform is likely to be a big attraction for HPC users. “We are seeing people ordering many more terabytes of memory on UV than they ever did on Altix with Itanium, simply because of the overall capability and the price-performance,” says Matzke.

Although UV supports highly scaled applications in a global memory model, today the majority of global memory applications scale to just 32 or maybe 64 threads. However, UV, like most shared memory machines, can also deliver great performance for MPI applications by properly exploiting the unified memory and the speed of the interconnect fabric. Moreover, an MPI offload engine has been incorporated into the UV hub to further accelerate this class of applications. SGI has demonstrated a 3X improvement in the HPCC GUPS benchmark with the offload engine enabled. According to Matzke, “70 percent of the people that buy these systems are running MPI, but have other application demands that make it really shine on this kind of an architecture.”
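
For context, GUPS (the HPCC RandomAccess benchmark) hammers exactly the traffic pattern that a low-latency fabric and an in-hub offload engine help with: tiny read-modify-write updates scattered randomly across a table too big for any cache. A minimal, hypothetical serial sketch of that update loop (a simplified stand-in, not HPCC’s or SGI’s code) looks like this:

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical sketch of the GUPS-style access pattern: random XOR updates into a
       large table. In the real HPCC RandomAccess benchmark the table spans the whole
       machine, so most updates touch remote memory, which is why interconnect latency
       (and, on UV, the hub's offload engine) dominates the score. */
    int main(void)
    {
        size_t n = 1UL << 24;                          /* table entries, a power of two; small here for illustration */
        uint64_t *table = malloc(n * sizeof *table);
        if (table == NULL) return 1;
        for (size_t i = 0; i < n; i++) table[i] = i;

        uint64_t ran = 1;
        for (size_t u = 0; u < 4 * n; u++) {
            /* Simple LCG as a stand-in for HPCC's pseudo-random stream. */
            ran = ran * 6364136223846793005ULL + 1442695040888963407ULL;
            table[ran & (n - 1)] ^= ran;               /* one update: random read-modify-write */
        }

        printf("%llu random updates done, table[0] = %llu\n",
               (unsigned long long)(4 * n), (unsigned long long)table[0]);
        free(table);
        return 0;
    }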

According to Geoffrey Noer, SGI’s senior director of product marketing, the company is currently taking orders for the new UV machines, with the first shipments expected by the second quarter of 2010 (following Intel’s release of the Nehalem EX CPUs). Initial customers include the University of Tennessee (1,024 cores, 4 TB memory), the North German Supercomputing Alliance, known as HLRN (two systems, 4,352 cores, 18 TB memory), CALMIP in France (128 cores, 1 TB memory), and the University of Hokkaido (180 cores, 360 GB memory). A number of UV systems have also been purchased by the federal government for certain “defense applications” (which shall remain nameless). SGI is not making UV pricing public, but potential buyers can always obtain a quote under NDA.

Although many customers using Itanium Altix systems will undoubtedly transition to the x86 UV platform, Noer says SGI will continue to offer the Altix 450 and 4700 systems. And even though the company is not publicly divulging specific plans for future Itanium-based shared memory machines, Noer did have this to offer: “It’s important not to look at Altix UV as a direct replacement for the 4700…. We are working with Intel on next-generation processor technologies as well. For those customers that are getting the benefits out of the larger address space and benefits with the 4700, they absolutely don’t need to switch to Altix UV.”
