May 08, 2008
On Wednesday, AMD presented its revised server processor plans for the next couple of years. The roadmap included the upcoming 45nm Shanghai chip, new six- and twelve-core Opteron processors, and the next-generation socket for DDR3 and PCIe Gen 2. AMD's new path also gives us some idea why Cray decided to play nice with Intel.
During the briefing, Randy Allen, general manager of AMD's server and workstation division, said the company's plans to move from 65nm to 45nm are on track. According to him, the first 45nm server processor, "Shanghai," is scheduled to ship by the end of the year. Shanghai is a die shrink of the 65nm quad-core Barcelona. The smaller geometry freed up enough silicon real estate for the chip designers to increase L2 cache from 256 KB to 512 KB and L3 cache from 2 MB to 6 MB. Besides the extra cache, AMD says Shanghai will get a 10 percent boost in memory performance and will add support for HyperTransport 3.0. The good news for OEMs is that the new part drops into existing Barcelona (Rev F) sockets with just a BIOS upgrade.
At this point the roadmap heads off into new territory. Apparently, plans for the previously discussed eight-core "Sandtiger" Opteron have been scrapped. Instead, AMD will develop "Istanbul," a six-core Opteron that is pin-compatible with the current Rev F sockets. The chip is slated for introduction in the second half of 2009.
Note that even if Shanghai and Istanbul hit their dates, they're going to be about a year behind their Xeon counterparts. Intel delivered its 45nm quad-core "Harpertown" chips back in November 2007 and plans to ship the six-core "Dunnington" in the second half of this year.
Further out, AMD has a chance to catch Intel with its third-generation Opteron processors. Two new offerings, the six-core "Sao Paolo" and the twelve-core "Magny Cours," will be built for a new platform -- what the company is calling Socket G34. Taking a cue from Intel's multi-chip packaging approach, the twelve-core Magny Cours is actually two six-core chips jammed into the same socket. The misspelled Sao Paolo (São Paulo) chip seems jinxed already. (AMD continues to name its silicon after Formula One venues.) Spelling error aside, the platform has lots of new goodies that should really rev throughput, including DDR3 memory support, four HyperTransport 3.0 links, and PCI Express 2.0. The third-generation Opterons are due out in the first half of 2010.
Toward the end of the briefing, Allen mentioned "Suzuka," a single-socket Opteron built on the 45nm process. Suzuka is expected to be available in the second quarter of 2009. Like the 65nm Budapest before it (which is just now becoming available), Suzuka is aimed at workstations and, coincidentally, Cray supercomputers. I say coincidentally because Suzuka will emphasize higher clock speeds, which is great for single-threaded workstation applications, but less great for highly parallel supercomputer workloads. At this point, AMD has no plans to move beyond four cores for single-socket platforms, a fact I'm sure Cray knew well before it became public this week.
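The clocks-versus-cores tradeoff behind that design choice is classic Amdahl's law: a faster clock speeds up everything, while extra cores only accelerate the parallel portion of a workload. A minimal sketch in Python (the parallel fractions below are illustrative assumptions, not AMD or Cray figures):

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction of the work and n is the core count.

def amdahl_speedup(parallel_fraction, cores):
    """Ideal speedup of a workload when spread across `cores` cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A workstation app that is only 50% parallel gains little from more cores...
workstation = amdahl_speedup(0.50, 4)   # ~1.6x on 4 cores

# ...while an HPC code that is 95% parallel scales much further.
hpc = amdahl_speedup(0.95, 4)           # ~3.5x on 4 cores

# A 20% clock bump, by contrast, delivers 1.2x to both, serial parts included.
print(f"workstation: {workstation:.2f}x, HPC: {hpc:.2f}x")
```

Which is why a quad-core part with higher clocks suits workstation buyers, while supercomputer builders would rather have the extra cores.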
Compared to Intel with its upcoming dynamically scalable Nehalem processors and 80-core teraflop experiments, AMD has taken a more conservative approach to the server processor space. Because of the company's recent financial hardships, AMD needs to aim directly at the market's sweet spot and not worry about developing bleeding-edge many-core processors or other exotic silicon for the uber-HPC crowd. With the recent loss of CTO Phil Hester, the Accelerated Processing Initiative that merges CPUs and GPUs on-chip might get shoved to the back burner (although this was by no means an HPC-only project). If AMD can deliver the hardware staked out on its new roadmap at prices that undercut comparable Intel parts, with competitive performance-per-watt numbers, the company will maintain its following among HPC system builders. Not that Intel is going to make that easy.
Posted by Michael Feldman - May 07, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.