August 31, 2007
In a commentary I wrote earlier this month, I talked about how Intel and AMD are preparing themselves for the looming quad-core war that will begin on September 10, when AMD officially launches its Barcelona Opteron processor. The two chipmakers have been busy jockeying for position by leaking pre-launch information about their upcoming quad-core server products.
On Tuesday, CNET's News.com reported that Intel will announce the availability of its new Xeon MP (multiprocessor) "Tigerton" chips next week, just days before Barcelona's coming-out party. Tigerton represents the Core microarchitecture implementation for the Xeon MP line, which is designed for platforms with four or more processors. The new processors are expected to be offered in a range of clock speeds (up to 2.93GHz) and power envelopes (as low as 50 watts). Next week's Tigerton launch appears to be timed to blunt the impact of the Barcelona introduction.
At this point, it's not clear if the initial Barcelona launch will even include an MP version. It's likely that AMD will announce only Opterons for dual-processor servers in September. In that case, the new Tigerton will temporarily represent the only path to a four-processor, quad-core x86 server, although Sun currently offers the equivalent in its eight-processor, dual-core Opteron Sun Fire X4600 server. As it stands today, AMD generally dominates the 4P-and-above x86 server space with its dual-core Opteron 8000 series processors.
The new Tigerton is part of the "Caneland" platform, in which each processor is directly connected to the chipset using separate links. This should alleviate some of the communication bottlenecks and further improve overall performance compared with Intel's current Xeon MP offering, which is based on the older Netburst architecture. The Opterons, with their HyperTransport links and integrated memory controller, should still retain a performance advantage in applications where main memory access or inter-processor communication is the dominant bottleneck.
Next year, however, Intel plans to introduce its Common System Interface, or CSI, along with on-chip memory controllers, in an effort to bring its architecture in line with the Opteron computing style. This week's feature article talks more about CSI and how it will change the Intel/AMD dynamic.
While the Xeon MP is mostly aimed at high-end x86 servers in the enterprise, traditional HPC users may consider Tigerton-based systems if the price, application performance, and power consumption line up correctly. For the same reasons, system vendors may consider building 4P Tigerton-based workstations specifically aimed at the technical computing market.
It will be interesting to see what Intel's new best buddy, Sun Microsystems, does with the new Xeons and Opterons. Sun will certainly incorporate the quad-core Opterons into its existing AMD-based Sun Fire product line for high performance computing. In fact, Sun doesn't need to invent new Sun Fire offerings specifically for Barcelona. The quad-core Opterons are plug-compatible with the dual-core version, so existing Sun Fire customers can just buy a bag of new chips and have at it. I'm guessing Sun has other plans for the Tigerton. I wouldn't be surprised to see those chips turn up in a 4P Sun Blade 6000 offering. There they may find some duty as a high performance computing platform, but more likely in a mixed workload environment, with traditional enterprise computing mixed with HPC.
If the Tigerton launch fails to put a damper on the Barcelona introduction on September 10, Intel has a second chance at the fall Intel Developer Forum (IDF), which takes place September 18-20. Using the IDF stage, Intel can trot out new performance benchmarks, talk about upcoming whiz-bang technology, and just generally remind everyone who invented the x86.
Meanwhile, AMD is already setting expectations for what lies beyond the initial Barcelona offerings. In InformationWeek, Alexander Wolfe writes that AMD is planning to quickly jack up the clock speeds of the quad-core Opterons in Q4. This is not a huge surprise. Because of Intel's new Core architecture and its move to 45nm process technology later this year, AMD is under a lot of pressure to keep single-core performance competitive with its rival. Because Opterons have integrated memory controllers to help with memory performance, AMD doesn't have to match Xeon clock speeds hertz for hertz to be competitive in overall performance, but it has to get close.
According to Wolfe, the Barcelona launch will include two processors: a standard 2.0GHz, 95 watt version and a low-power 1.9GHz, 68 watt version. He says AMD will introduce faster versions of each before the end of the year. A higher-performance 120 watt processor won't ship until the fourth quarter. At that point, it's expected to clock in at 2.3GHz or better. Keep in mind that AMD's Q4 releases will probably be going up against Intel's 45nm Penryn processors, which are expected to be available in the same general timeframe. Information leaked by Intel a couple of weeks ago has the top-of-the-line Penryn chip at 3.16GHz.
Beyond 2007, AMD is planning to deliver the 45nm "Shanghai" Opterons (dual and quad) in the second half of 2008. As a nod to Intel, these processors will incorporate more cache -- 512KB of L2 cache per core and 6MB of unified L3 cache. The Shanghai Opterons will presumably be going up against Intel Xeons with the Nehalem microarchitecture, which will support the new CSI links and integrated memory controllers. If that contest comes to pass, it may prove to be the closest matchup between the two chipmakers in a long time.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - August 30, 2007 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.