Intel Rolls Out New Server CPUs

By Michael Feldman

May 14, 2012

Intel Corp. has launched three new families of Xeon processors, joining the Xeon E5-2600 series the chipmaker introduced in March. These latest chips span the entire market for the Xeon line, from four- and two-socket servers down to entry-level workstations and microservers. A number of HPC server makers, including SGI, Dell, and Appro, announced updated hardware based on the new silicon.

The newest Xeon of greatest interest to high performance computing is the Sandy Bridge E5-4600 series, which is built for four-socket servers. At the CPU level, the E5-4600 is more or less identical to the E5-2600 for two-socket systems: both are available in 4-, 6-, and 8-core flavors, support four memory channels per socket, include 40 lanes of integrated PCIe 3.0, and come with up to 20 MB of last-level cache. The four-socket E5-4600 can support twice as much memory per system (up to 1.5 TB) as its two-socket counterpart, but that just serves to keep the per-processor and per-core memory ratios in line.
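
As a quick back-of-the-envelope check on that ratio, here is a minimal Python sketch, assuming the E5-2600's commonly cited 768 GB two-socket maximum:

    # Per-socket memory for the two parts; the 768 GB E5-2600 figure
    # is an assumption (12 DIMM slots x 32 GB per socket).
    e5_4600_max_gb = 1536   # 1.5 TB across four sockets
    e5_2600_max_gb = 768    # across two sockets (assumed)

    print(e5_4600_max_gb / 4)   # 384.0 GB per socket
    print(e5_2600_max_gb / 2)   # 384.0 GB per socket -- same ratio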

In normal times, the new four-socket Xeon would simply take the place of the older technology, in this case the Xeon E7 (“Westmere-EX”), but Intel has moved the new chip into a somewhat different role. According to Michele Fisher, a senior product marketing engineer at Intel, the E5-4600 is intended to complement the E7, rather than replace it. Specifically, the Sandy Bridge version is a “cost and density optimized” CPU for four-socket servers, which in this case is reflected in fewer cores (maxing out at 8 instead of 10 on the Westmere-EX), a lower memory capacity (1.5 TB instead of 2.0 TB), and reduced RAS support. It’s also less expensive. The price range on the new four-socket Xeons is $551 to $3,616; on the older Westmere-EX E7 chips, it’s $774 to $4,616.

The idea, says Fisher, is to target the new four-socket CPUs at dense, scale-out systems in domains like HPC and telco, and at growing, especially cost-conscious geographies like China. And because of their density and better energy efficiency, the new CPUs are especially suitable for four-socket blade servers. The older E7 chips will continue to be sold into more traditional enterprise systems, in particular high-end transactional database machines, where the larger memory footprint and high-reliability features are most appreciated.

Since the E5-4600 supports Advanced Vector Extensions (AVX), courtesy of the Sandy Bridge microarchitecture, the new chip can do floating point operations at twice the clip of its pre-AVX predecessors. According to Intel, a four-socket server outfitted with E5-4650 CPUs can deliver 602 gigaflops on Linpack, nearly twice the flops that can be achieved with the top-of-the-line E7 technology. That makes this chip a fairly obvious replacement for the E7 when the application domain is scientific computing.
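
To see how plausible that Linpack figure is, here is a minimal sketch of the peak-flops arithmetic, assuming the E5-4650's 2.7 GHz base clock (no Turbo) and Sandy Bridge's 8 double-precision flops per core per cycle with AVX:

    # Theoretical peak for a four-socket E5-4650 box versus the
    # quoted 602-gigaflop Linpack result.
    sockets, cores, ghz, flops_per_cycle = 4, 8, 2.7, 8

    peak_gflops = sockets * cores * ghz * flops_per_cycle
    print(peak_gflops)        # 691.2 gigaflops peak
    print(602 / peak_gflops)  # ~0.87 -- Linpack at roughly 87% of peak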

Which explains why SGI is upgrading its Altix UV shared memory supercomputing platform from the E7 to the E5-4600. And since the UV has SGI’s custom NUMAlink interconnect and node controller, the system can scale well beyond the four sockets and 1.5 TB of cache-coherent memory supported by the native Intel chipset.

In fact, SGI’s new Sandy Bridge-based UV can scale up to 4,096 cores and 64 TB of memory in a single system. That’s twice the number of cores and four times the memory of the older Westmere-based UV. And because of the chip’s AVX support, peak flops per UV rack has doubled, from 5.4 to 11 teraflops.
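
Those figures imply some tidy per-core numbers. A minimal sketch, reusing the 2.7 GHz, 8-flops-per-cycle assumptions from above (the actual UV rack configuration may differ):

    cores, memory_tb, rack_teraflops = 4096, 64, 11

    print(memory_tb * 1024 / cores)   # 16.0 GB of memory per core
    gflops_per_core = 2.7 * 8         # 21.6 gigaflops per core
    print(rack_teraflops * 1000 / gflops_per_core)  # ~509 cores, or ~64 sockets, per rack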

SGI has already sold one of its new UVs to the COSMOS Consortium, a group that uses HPC to support origin-of-the-universe research associated with Stephen Hawking’s cosmology work. Some of the simulations are designed to reveal the nature of the universe immediately after — as in one second after — the Big Bang. The computer will also support other astrophysics research, including the search for planets outside our solar system.

Dell is also using the E5-4600, but in more conventional HPC gear. It’s putting the new Xeon into its four-socket PowerEdge M820 and R820, a blade and rackmount server, respectively. The M820 can house up to 10 full-height blades in a 10U chassis, while the half-as-dense rackmount R820 puts a single four-socket server into a 2U box.
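
The "half-as-dense" comparison falls out of simple sockets-per-rack-unit arithmetic; a minimal sketch:

    m820 = (10 * 4) / 10   # 10 four-socket blades in 10U -> 4.0 sockets per U
    r820 = 4 / 2           # one four-socket server in 2U -> 2.0 sockets per U
    print(m820 / r820)     # 2.0 -- the blade packs twice the sockets per U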

A couple of steps down, performance-wise, from the E5-4600 is Intel’s new Sandy Bridge E5-2400, aimed at lower-end two-socket servers. It’s designed to be a more energy-efficient alternative to the original two-socket E5-2600. It’s also considerably cheaper, with a price range of $188 to $1,440.

The E5-2400 series spans the same core counts as the E5-2600, but gets by with one fewer memory channel (three instead of four), fewer PCIe lanes (24 instead of 40), and half the maximum memory (384 GB) of its older sibling. More importantly, these tend to be slower chips; the top-end E5-2440 is nearly a full gigahertz slower (2.4 GHz) than the fastest E5-2600. But that translates into lower power draw, from 60 watts on the low-end part up to 95 watts at the top end.
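
The missing channel matters for bandwidth-hungry codes. A minimal sketch of the peak memory bandwidth gap, assuming both parts run DDR3-1600 (12.8 GB/s per channel):

    per_channel_gbs = 12.8       # DDR3-1600, 8 bytes per transfer (assumed)
    print(3 * per_channel_gbs)   # 38.4 GB/s per E5-2400 socket
    print(4 * per_channel_gbs)   # 51.2 GB/s per E5-2600 socket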

Their energy efficiency and cost make them suitable for scale-out clusters that don’t require a lot of single-threaded horsepower. Dell, for example, is using the E5-2400 processors in its new M420 blade, which is being positioned for some HPC-type workloads, especially animation and CGI rendering. The M420 is the first quarter-height dual-socket blade on the market; 32 of the mini-blades (up to 512 cores) can be squeezed into a 10U chassis. As with the four-socket gear, Dell is also offering a rackmount counterpart, the R420.
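
The chassis core count is just the blade math; a minimal sketch, assuming top-bin 8-core E5-2400 parts in both sockets:

    blades, sockets_per_blade, cores_per_socket = 32, 2, 8
    print(blades * sockets_per_blade * cores_per_socket)   # 512 cores in 10U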

SGI is using the E5-2400 CPU as the base processor for its Hadoop clusters, as well as in its Rackable server line for more general enterprise duty. For many Hadoop applications, which tend to be bound by data movement rather than raw computational muscle, this chip could be a nice fit. And even though it’s slower than the mainline E5-2600 chips, SGI is still promising 22 percent better price-performance and 27 percent better performance per watt than the corresponding Westmere EP-based Hadoop gear.

The third new Xeon is the one-socket E3-1200 v2, a 22nm Ivy Bridge CPU for entry-level servers and workstations. Offered in dual-core and quad-core configurations, the chips range in price from $189 to $884. The fastest part, at 3.7 GHz, offers quite respectable performance, but with only 8 MB of cache and a maximum memory capacity of 32 GB, the chip might be a bit of a stretch for HPC duty.

The family also includes two interesting new CPUs aimed at the microserver market, including Intel’s lowest-power Xeon, the E3-1220L v2. With a TDP of just 17 watts, that’s approaching ARM CPU territory. For example, Calxeda makes a quad-core ARM chip for microservers that draws 5 watts, but that’s a 32-bit CPU, which limits its application in the server room rather substantially. The 64-bit E3 Xeon would have no such problem.

Intel is not positioning these new microserver Xeons for high performance computing; ostensibly they’re targeted at front-end web workloads, content delivery, and dedicated hosting. However, some creative server maker might be able to design a nifty little one-socket box with the E3-1220L v2 that could be used for some types of embarrassingly parallel codes. But since Intel would much rather sell its higher-end E5 Xeons to its HPC customers, we’re not likely to see Xeon-based microservers in supercomputers anytime soon.
