Intel Scales Up Cores and Memory with New Westmere EX CPUs

By Michael Feldman

April 6, 2011

This week Intel launched its new Westmere EX lineup, the latest Xeons aimed at large-memory, multi-socket servers. The new chips come in 6-, 8- and 10-core flavors and will be sold under the Xeon E7 name. According to Intel, these latest CPUs deliver 40 percent greater performance than the previous-generation Nehalem EX (Xeon 7500 and 6500) processors while maintaining the same power draw.

Compared to the 45nm-based Nehalem EX line, the E7 silicon is built on 32nm process technology, which allowed Intel to add a couple more cores and an additional 6 MB of L3 cache to the top-end chip. Despite that, the transistor count grew only modestly, from 2.3 billion to 2.6 billion. The thrust was to make the cores smarter and more efficient at their job, not to rely on the brute force of Moore’s Law.

The E7s are 42 percent quicker than their Nehalem ancestors, at least in integer throughput (using the SPECint_rate_base2006 benchmark). One might wonder how Intel accomplished this, since the core count and L3 cache each grew by only 25 percent. Apparently 11 percent of the performance increase comes from optimizations in the latest Intel Compiler XE 2011. The rest of the bump can probably be attributed to the E7's faster clock. (Intel pitted a 2.4 GHz E7-4870 against a 2.26 GHz Nehalem X7560 in its benchmark tests.) Floating point throughput (SPECfp_rate_base2006) increased by a more modest 32 percent.
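As a rough back-of-the-envelope check (a sketch of our own, not Intel's accounting), the quoted figures roughly hang together if you assume the core, clock, and compiler gains compound multiplicatively:

```python
# Back-of-the-envelope decomposition of the 42 percent SPECint_rate gain,
# using only figures cited above. The multiplicative model is a simplifying
# assumption, not Intel's own breakdown.

core_scaling  = 10 / 8        # 8 -> 10 cores: +25 percent
clock_scaling = 2.40 / 2.26   # E7-4870 at 2.4 GHz vs. X7560 at 2.26 GHz
compiler_gain = 1.11          # ~11 percent credited to Intel Compiler XE 2011

estimate = core_scaling * clock_scaling * compiler_gain
print(f"estimated speedup: {estimate:.2f}x (measured: 1.42x)")
# Prints about 1.47x; the gap to the measured number simply reflects that
# throughput never scales perfectly with core count and clock.
```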

Using the OpenMP benchmark (SPEC OMP2001) for shared memory throughput, the E7-4870 delivered only an 18 percent boost over the Nehalem X7560. On some real-life memory-intensive HPC workloads, however, the gains were on par with the integer and FP results. For example, Intel reported that throughput improved 21 to 37 percent when exercising the E7s on a number of EDA analysis tools. It remains to be seen how other big-memory HPC codes fare on the new hardware.

Besides the core count bump, the other notable E7 feature is its support for larger memory capacity. A four-socket server can scale up to 2 TB of RAM and 102 GB/second of memory bandwidth, twice that of Nehalem EX. Intel accomplished this by adding support for 32 GB DIMMs. (The E7 still relies on the same 16 DIMM slots per socket.) These 32 GB DIMMs tend to be rather expensive, though, and so far the server OEMs are only offering E7 systems with 16 GB DIMMs. But 1 TB in a four-socket box is quite useful in its own right, and will be able to handle some rather large in-memory databases.
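The quoted capacities fall straight out of the DIMM arithmetic. Here is a minimal illustrative tally (the max_memory_gb helper exists only for this example), using the slot counts and DIMM sizes mentioned above:

```python
# Illustrative tally of maximum DRAM capacity, based on the 16 DIMM slots
# per socket and the DIMM sizes mentioned in the article.

def max_memory_gb(sockets, dimm_slots_per_socket=16, dimm_size_gb=32):
    """Maximum DRAM (in GB) for a fully populated E7 system."""
    return sockets * dimm_slots_per_socket * dimm_size_gb

print(max_memory_gb(4))                   # 2048 GB = 2 TB with 32 GB DIMMs
print(max_memory_gb(4, dimm_size_gb=16))  # 1024 GB = 1 TB with today's 16 GB DIMMs
print(max_memory_gb(8, dimm_size_gb=16))  # 2048 GB = 2 TB for the eight-socket boxes noted below
```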

Perhaps more importantly, the E7 chips can be paired with low-voltage memory modules (LV DIMMs) to help curb energy consumption, especially in terabyte-scale DRAM configurations. Intel has also added integrated memory buffers to further reduce power draw.

Unlike the Nehalem EX line, the E7 family is divided into three processor series according to socket support. The E7-2800 series is geared for two-socket systems, while the E7-4800 series is designed for machines with four CPUs. The quad-socket setup is probably the sweet spot for the E7 family, given that four CPUs in one server is apt to be less expensive than two dual-socket boxes; plus you get twice the memory headroom. The E7-8800 series is for eight-socket machines. These CPUs are priced at a premium, but if you’re looking for an x86 SMP machine with up to 80 cores (160 threads) and multiple terabytes of memory, this is the CPU for you.
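The headline figures for an eight-socket box are simple socket math, sketched below under the assumption of 10-core parts with Hyper-Threading (two hardware threads per core):

```python
# Core and thread tally for an eight-socket E7-8800 system, assuming the
# 10-core parts with Hyper-Threading enabled (two threads per core).

sockets, cores_per_socket, threads_per_core = 8, 10, 2
cores   = sockets * cores_per_socket      # 80 cores
threads = cores * threads_per_core        # 160 hardware threads
print(cores, threads)                     # 80 160
```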

At launch, 19 server makers announced E7-based platforms, including the usual suspects like IBM, HP, Dell, Cisco, and Oracle. The principal destination for these chips will be “mission-critical” enterprise servers, the segment Intel first pursued in a major way with its Nehalem EX line. To chase that application space, Intel has incorporated a number of new security and RAS features which, according to the company, put its latest x86 offering on par with RISC CPUs and even its own Itanium chip. Mission-critical enterprise computing is estimated to be worth about $18 billion per year — about twice that of the HPC server market.

But a number of vendors — SGI, Cray, Supermicro, and AMAX, thus far — are also using the E7s to build scaled-up HPC machinery. SGI, for example, has latched onto the E7s to refresh its Altix UV shared memory products. The low-end Altix UV 10 and mid-range Altix UV 100 both benefited from the extra cores and memory capacity.

For example, the UV 100 now scales to 960 cores and 12 TB of shared memory in just two racks. The top-of-the-line Altix UV 1000 can also use the new E7 CPUs, but for architectural reasons and OS limitations it still tops out at 2,048 cores and 16 TB of memory. A UV 1000 can still take advantage of the higher-performing 8-core and 10-core E7s, however, so it can squeeze out more FLOPS per watt than before and scale past 20 teraflops of peak performance.

Cray’s CX1000-S is also being offered with E7 chips. Although Cray didn’t announce specific configurations, as with the Altix UV, the higher-performing E7s would make this SMP box faster and/or more power efficient.

Finally, both Supermicro and AMAX have come up with four-socket and eight-socket E7-based servers (these might actually be the same hardware). The top-end offerings deliver up to 80 cores and 2 TB of memory in a 5U form factor, while the four-socket servers provide half that scalability, but in a 1U, 2U, or 4U package. The 8-way offerings can be outfitted with up to four NVIDIA GPUs if you want to pair the E7 parts with some extra vector acceleration. Although these Supermicro and AMAX systems are geared for HPC, at least the non-GPU versions are also being positioned for big-memory enterprise workloads.

These high-end CPUs are priced accordingly. The top-end 130-watt E7-8870 runs over $4,600 in quantities of a thousand. More mid-range E7s will run half that, and even the 10-core chip for dual-socket systems costs over $2,500. Intel apparently believes they are worth the premium, and given that these chips are being paired with lots of expensive DRAM and software, the CPU itself is probably one of the best-valued components in these high-end shared memory servers.

Regardless, the E7 parts will be less expensive than RISC processors, the Itanium, or any proprietary CPU. At the other end of the price spectrum, Intel will have to contend with AMD, which is planning to launch its Bulldozer-class “Interlagos” CPU in Q3. Those chips come in 12-core and 16-core versions and can populate four-socket servers. So for users with SMP workloads that are chewing on terabytes of data, the x86 architecture is looking a bit more tempting.
