March 31, 2009
On Monday Intel launched its much-anticipated Nehalem dual-socket server chip, the Xeon 5500 processor series. With the inclusion of a number of architectural features, some of which were copied from AMD, the new Xeons represent the most significant redirection of Intel's x86 server product line in more than a decade and the largest performance increase in Xeon's history.
Intel Senior Vice President and General Manager of the Digital Enterprise Group Pat Gelsinger orchestrated a Nehalem sales pitch for a live and remote webcast audience on Monday in San Francisco, where he characterized the new 45nm chip as "the most important server launch since the Pentium Pro" -- the company's first server processor, introduced in 1995.
As most of us know by now, the new Xeons will be the first Intel server chips to include an integrated memory controller (IMC). According to Intel, the three-channel IMC will deliver more than triple the memory bandwidth of the older Xeon 5400 chips (which relied on a discrete memory controller). On the STREAM benchmark, the company shows a 3.63-fold increase over the previous generation. Because Intel has jumped to DDR3 memory, capacity gets a big boost too: a maxed-out system with 18 DIMM slots, populated with 8 GB DIMMs, yields a whopping 144 GB of memory.
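The bandwidth claim is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes DDR3-1333 DIMMs, the fastest speed grade the top Xeon 5500 SKUs support; the 3.63x STREAM figure cited above is a measured result, while this is only the theoretical peak per socket.

```python
# Theoretical peak memory bandwidth per socket for the Xeon 5500's
# integrated memory controller (assuming DDR3-1333 DIMMs).
channels = 3                  # IMC memory channels per socket
transfers_per_sec = 1333e6    # DDR3-1333: 1333 megatransfers/second
bytes_per_transfer = 8        # 64-bit data bus per channel

peak_gb_s = channels * transfers_per_sec * bytes_per_transfer / 1e9
print(f"peak per socket: {peak_gb_s:.1f} GB/s")  # ~32.0 GB/s
```

Sustained STREAM bandwidth is always a fraction of this theoretical number, but even the peak dwarfs what the old shared front-side bus could deliver.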
The new QuickPath Interconnect (QPI) is the other big architectural enhancement on the Nehalems, replacing the older Front-Side Bus technology. QPI is a point-to-point processor communication link that represents Intel's version of AMD's HyperTransport interconnect, and, like its rival, allows for an elegant way to do NUMA on a multiprocessor platform. The QPI implementation on the Xeon 5500 will deliver up to 25.6 GB/second per link.
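The 25.6 GB/second number decomposes neatly, assuming the fastest QPI speed grade offered on the Xeon 5500 (6.4 GT/s signaling with a 16-bit data payload per direction):

```python
# Where the 25.6 GB/s per-link QPI figure comes from (assuming the
# 6.4 GT/s speed grade with a 16-bit data payload per direction).
transfers_per_sec = 6.4e9   # gigatransfers per second
payload_bytes = 2           # 16 data bits per transfer, per direction
directions = 2              # QPI links are full duplex

gb_s = transfers_per_sec * payload_bytes * directions / 1e9
print(f"{gb_s:.1f} GB/s per link")  # 25.6 GB/s
```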
The Nehalem generation also brings back simultaneous multithreading -- now called hyper-threading -- to the Intel product line. This allows each of the four processor cores to manage two virtual threads. On a two-socket Nehalem server or workstation, that translates into 16 threads per machine.
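The thread count is straight multiplication across sockets, cores, and hardware threads:

```python
# Logical CPUs visible to the OS on a two-socket Nehalem system
# with hyper-threading enabled.
sockets, cores_per_socket, threads_per_core = 2, 4, 2
logical_cpus = sockets * cores_per_socket * threads_per_core
print(logical_cpus)  # 16
```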
Another performance-enhancing goodie is the so-called "Turbo Boost Technology," which can temporarily rev up clock frequencies to take advantage of variations in compute load. The clock boost can be applied to individual cores or across the entire processor, allowing threads to execute faster when power and thermal conditions permit. Although this might seem like a feature geared toward applications that have trouble fully utilizing the available cores, one example Gelsinger demonstrated was a highly parallel Black-Scholes financial workload.
The result is that, at certain times, some threads will be able to run faster than the processor's nominal clock speed. But the magnitude of the speedup is only a fraction of what the application would attain if it fully utilized all cores; in other words, the chip can't run a thread three times faster just because three cores are idle. The clock speed rises in 133 MHz increments until the chip's maximum power and thermal threshold is reached.
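The stepping behavior can be sketched as a simple function of bin count. The bin counts iterated over below are illustrative, not per-SKU specifications; the 2.93 GHz base clock is the X5570's nominal speed from the list that follows.

```python
# Turbo Boost raises the clock in 133 MHz bins while power and thermal
# headroom remains. Bin counts here are illustrative, not SKU specs.
BASE_GHZ = 2.93    # X5570 nominal clock
BIN_GHZ = 0.133    # one Turbo Boost frequency step

def boosted_clock(base_ghz: float, bins: int) -> float:
    """Effective clock after applying `bins` turbo steps."""
    return base_ghz + bins * BIN_GHZ

for bins in range(4):
    print(f"{bins} bins -> {boosted_clock(BASE_GHZ, bins):.2f} GHz")
```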
Even the nominal clock speeds are pretty impressive, though. The Xeon 5500 parts targeted at HPC machines include the X5570 (2.93 GHz), X5560 (2.80 GHz) and X5550 (2.66 GHz). Maximum power consumption (TDP) for all three chips is 95 watts, but Intel has incorporated power gating and other technology to reduce energy draw when the compute load drops and Turbo Boost is not engaged. In particular, the new Xeons use 50 percent less energy at idle than their 5400-series forebears. Intel has also introduced a feature called Node Manager, which lets the IT department set energy consumption policy at the platform level.
All of this adds up to substantially better application performance on high performance codes. For example, on an LS-DYNA crash simulation, Intel has demonstrated a 2.02X speedup compared to the Xeon 5400, while the corresponding speedup for a Fluent CFD code is 2.20X.
According to Gelsinger, HPC will be the earliest shipping segment for the new Xeons, where they will be used across the application spectrum, from scientific research to weather modeling and engineering simulation. At the highest end of the market, the 5500 will be deployed in multiple petaflop-class machines in 2009, including NASA's Pleiades supercomputer and Canada's SciNet machine -- part of the computing effort for CERN's Large Hadron Collider. Nehalem-based supers are sure to make their presence felt on upcoming TOP500 lists.
"In 1993 the fastest high performance computer was about 90 gigaflops," recalled Gelsinger. "Today that's a single high performance 5500 dual-processor workstation."
Although a number of OEMs have been previewing Nehalem-based workstations and servers for weeks, the official release of the microprocessor on Monday precipitated dozens of announcements from system vendors large and small. In the HPC arena, OEMs such as IBM, HP, Dell, Sun Microsystems, SGI, Cray, Appro, and Penguin Computing, among others, have all launched new systems or product upgrades based on Nehalem parts.
The big question, of course, is whether anyone will buy new Nehalem-equipped gear in an economic climate where IT budgets are being squeezed. Unlike Shanghai, AMD's latest 45nm quad-core server chip that is socket-compatible with the previous generation of Opterons, Nehalem is no drop-in upgrade. If users want to tap into Intel's new wonder chip, they'll have to purchase new machines, and they won't be particularly cheap.
Intel is trying to make the case that the new silicon is so compelling, from either a performance or an energy efficiency point of view, that an upgrade is a no-brainer, even -- or especially -- as IT bean counters try to make ends meet. According to Intel, if a datacenter still has 2005-era, single-core Xeon gear, a footprint swap with Nehalem machines will yield a 9X performance improvement, while saving 20 percent in energy costs. Alternatively, if the datacenter just needs to maintain the same performance, the company claims consolidating servers at a 9:1 ratio would pay for itself in eight months, largely due to a 90 percent reduction in energy costs.
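The payback arithmetic behind the consolidation claim is simple to model. In the sketch below, only the 9:1 consolidation ratio and the roughly 90 percent energy saving come from Intel's pitch; the per-server energy cost and the new server price are hypothetical placeholders, not Intel figures.

```python
# A sketch of the consolidation payback math behind Intel's 9:1 claim.
# The dollar figures below are hypothetical placeholders; only the 9:1
# ratio and ~90% energy saving come from Intel's pitch.
old_servers = 9
energy_cost_per_server_month = 50.0   # hypothetical $/month per old server
new_server_price = 3000.0             # hypothetical acquisition cost

old_fleet_energy_cost = old_servers * energy_cost_per_server_month
monthly_saving = 0.9 * old_fleet_energy_cost  # ~90% energy reduction claimed
payback_months = new_server_price / monthly_saving
print(f"payback: {payback_months:.1f} months")  # ~7.4 with these inputs
```

With these placeholder inputs the payback lands in the same ballpark as Intel's eight-month claim, but the result is obviously sensitive to local energy prices and server pricing.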
There has been talk that Nehalem may be the catalyst needed to pry open IT budgets. The installed base in datacenters has been estimated at 40 percent single-core machines and 40 percent dual-core. If so, there should be plenty of pent-up demand for more powerful, energy-efficient servers. Gelsinger maintains that although the server market is suffering under the weight of the recession, it's relatively healthy compared to the rest of the IT market. "We continue to see that there is strength in some segments within servers," he added, "such as the big datacenters as well as in HPC."
Intel is trying to convince customers to take the Nehalem plunge this year, all but promising that the next generation of "Westmere" 32nm processors will be a drop-in upgrade. The server versions of those chips will sport six cores and are scheduled to be available sometime in 2010. For those looking for an even bigger core count, the eight-core Nehalem EX chips are expected to arrive at the end of 2009 or the beginning of 2010. And for truly big shared memory applications, ScaleMP offers technology today that can aggregate up to 32 Nehalem sockets, yielding a virtual SMP with 128 cores and 4 terabytes of memory.