September 09, 2011
This week chipmaker AMD announced the first shipments of its 16-core Interlagos CPUs, based on the company's latest "Bulldozer" architecture. Interlagos, whose official series designation is Opteron 6200, is AMD's high-end server offering that will soon be duking it out with Intel's upcoming Sandy Bridge Xeons.
The Interlagos CPU, as you may recall, comes in 12- or 16-core flavors, has a quad-channel memory controller, up to 16 MB of L3 cache, and is compatible with AMD's Opteron 6000 G34 socket. The chips are being produced by GlobalFoundries, AMD's fab spinoff, with its 32nm SOI process technology.
Interlagos will be AMD's first implementation of the Bulldozer architecture, which will span both server and high-end client products in the company's portfolio. The major design change from previous AMD CPUs is the introduction of a module-type structure in which two integer cores share a single floating point core, the idea being to increase thread performance and optimize chip real estate.
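To put rough numbers on what that shared floating point unit means for peak throughput, here is a back-of-the-envelope sketch. Each Bulldozer module can issue 8 double-precision flops per cycle through its shared FP unit; the 2.3 GHz clock used below is an assumed figure for illustration, not a confirmed spec for any particular Interlagos SKU.

```python
# Back-of-the-envelope peak double-precision throughput for a 16-core
# Interlagos chip. Flops/cycle reflects the shared FP unit in each
# Bulldozer module; the clock speed is an assumed value for illustration.
MODULES = 8             # 16 integer cores pair up into 8 modules
DP_FLOPS_PER_CYCLE = 8  # per module, via the shared 256-bit-capable FP unit
CLOCK_GHZ = 2.3         # assumed clock; actual SKUs will vary

chip_gflops = MODULES * DP_FLOPS_PER_CYCLE * CLOCK_GHZ
print(f"Peak per chip: ~{chip_gflops:.0f} gigaflops")

# At that rate, a ~400 peak-teraflop system would need on the order
# of 2,700 sockets.
sockets = 400_000 / chip_gflops
print(f"Sockets for 400 teraflops: ~{sockets:.0f}")
```

Scale the per-chip figure by socket count and you can sanity-check the system-level peak numbers quoted for the Cray installations below.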
In a press release on Wednesday, AMD said production of the Interlagos parts began last month and that shipping has started. According to the company, many of the initial shipments will end up in "large custom supercomputer installations that are now underway." Since Cray is the only vendor that makes custom supers with AMD CPUs, one can assume they are talking about those machines.
For some reason, all these initial Interlagos-based supers are in Europe, which means AMD will have a nice little Opteron export business going for the rest of the year.
For example, the Swiss National Supercomputing Centre (CSCS) is going to upgrade its XT5 “Monte Rosa” system to an XE6 by installing the Gemini interconnect and swapping out the older Opterons for Interlagos parts. The system upgrade is scheduled to be completed before the end of the year and should double system performance to about 400 peak teraflops, making it Switzerland's most powerful supercomputer.
HECToR (High-End Computing Terascale Resource), the UK's national supercomputing service, is also in line for an Interlagos refresh before the end of 2011. With the new CPUs, that system is expected to top out at 827 teraflops, which just about pushes the machine's terascale designation to the limit.
Cray's first GPU-accelerated supercomputer, the upcoming XK6, will also incorporate Interlagos CPUs, hooking up an NVIDIA Tesla X2090 module to each Interlagos chip. The first XK6 installation, at least the first one publicly announced, will be deployed in Switzerland at the aforementioned CSCS. That system, named "Piz Palu," will be upgraded from its current XE6m configuration to an XK6 using the Interlagos parts and the new Tesla modules.
Finally, the High Performance Computing Center Stuttgart (HLRS) at the University of Stuttgart will be deploying a new XE6 supercomputer later this year, and as it turns out, that machine will be Interlagos-based as well. The contract win was announced in 2010 as Cray's first Cascade-class super.
Presumably, Interlagos CPUs will soon be appearing in more vanilla HPC clusters as well, but there they will be competing for market share with the more popular Xeon CPUs from Intel. The 32nm Sandy Bridge Xeons (Sandy Bridge EP, aka Xeon E5) are slated to ship in Q4, and barring some reversal of fortune for Intel, will continue to dominate the x86 cluster market.
The Bulldozer architecture is something of a watershed moment for AMD's high-end CPU business, which has been yielding market share to its larger competitor for several years. Currently AMD's server market share is around 6 to 7 percent, compared to its high water mark of about 25 percent in 2006, when its dual-core Opterons were the hottest chips around. Its share in the HPC space is larger than that -- more than 10 percent by most estimates -- but a nice chunk of that is due to the thousands of Opterons that go into these big custom supercomputers. But Cray's plans to bring Intel Xeons into its Cascade-class supercomputers in the next couple of years could threaten even that.
The good news for AMD at this point is that they hit their Q3 schedule for shipping their new Opterons, and Cray is buying them by the bushel. The Sandy Bridge-Bulldozer battle awaits.
Posted by Michael Feldman - September 09, 2011 @ 1:19 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.