August 18, 2006
In 1999, while working for IBM, Peter Ungaro sold the DOE's National Energy Research Scientific Computing (NERSC) Center its first IBM SP (Scalable POWERparallel) supercomputer. Until then, NERSC's supercomputers had been supplied by Cray (or CDC); after 1999, those Cray and CDC machines were gradually phased out. Seven years later, Ungaro, now the CEO of Cray, is enjoying the company's $52 million contract win to deploy Cray's next-generation "Hood" supercomputer at NERSC.
The Hood system is expected to house more than 19,000 AMD Opteron 2.6 GHz processor cores, with one dual-core socket making up each node. Each node will have 4 gigabytes of memory and a dedicated SeaStar connection to the internal network. The full system will consist of over 100 cabinets with 39 terabytes of aggregate memory capacity.
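Just to sanity-check how those published figures hang together (my own back-of-the-envelope arithmetic, not anything from Cray or NERSC), a short Python sketch ties the core count, node memory, and aggregate memory claims to one another:

```python
# Back-of-the-envelope check of the published Hood/NERSC figures.
# Inputs are the numbers quoted above; the derived values are rough estimates.

cores = 19_000            # "more than 19,000" Opteron cores
cores_per_node = 2        # one dual-core socket per node
mem_per_node_gb = 4       # 4 GB of memory per node

nodes = cores // cores_per_node
aggregate_mem_tb = nodes * mem_per_node_gb / 1024   # GB -> TB (binary)

print(f"nodes:            ~{nodes:,}")
print(f"aggregate memory: ~{aggregate_mem_tb:.0f} TB")
# Roughly 9,500 nodes and ~37 TB -- consistent with the quoted
# "over 100 cabinets with 39 terabytes of aggregate memory" once the
# "more than 19,000" cores are counted exactly.
```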
"We are excited that NERSC will again be home to a large Cray supercomputer," commented Ungaro last week.
For the past few months, Cray seems to be on a roll. In June, they announced the $200 million multi-year deal with the DOE's Oak Ridge National Laboratory (ORNL) to upgrade their supercomputers, culminating in the deployment of a petaflops system in 2008. Prior to this, the company announced its selection as the preferred bidder for the UK's Engineering and Physical Sciences Research Council and also recorded an order from the Swiss National Computing Lab to upgrade their XT3 system. Now with the NERSC win, Cray seems to be hitting its stride.
That NERSC system is scheduled to be deployed at Lawrence Berkeley National Laboratory later this year, with completion of the installation anticipated sometime in the first half of 2007. HPCwire talked with Bill Kramer, general manager of the NERSC Center, about the significance of the Cray deployment and some of the evaluation criteria. Kramer was very focused on application performance during the evaluation process, relying on NERSC's own workload-based benchmark -- the Sustained System Performance Metric (SSP).
"We're not disclosing price/performance results for any of the vendors, but on the performance side we were looking for between 7.5 and 10 sustained teraflops across our SSP benchmarks," said Kramer. "The Cray system proposed was above that range, which we thought was very impressive. Now that we have finished our discussions with Cray, we now expect the system to have an SSP of 16.1 teraflops".
Read the entire Q&A in this week's Features section.
Cray also reported its second quarter earnings earlier this month, posting a loss, but exceeding analysts' expectations. The Motley Fool recently commented on the company's improving fortunes (even before the NERSC win) and offered some investment advice associated with Cray's "lumpy sales." Read the article at http://www.fool.com/News/mft/2006/mft06080925.htm.
Lumpy sales aside, if Cray is selected as a vendor for Phase III of DARPA's HPCS program -- as many expect -- it will cap off a glorious summer for the company.
The Quad Couple
As anticipated, AMD announced its new Rev F dual-core Opteron processors on Tuesday, calling them the Next-Generation AMD Opteron processor -- a name I assume will be changed before the next "next generation" comes along. The chip adds DDR2 memory support and some virtualization features, but the company devoted much of its message this week to the upgrade path offered by the new Opterons and the significance of its Torrenza initiative. Torrenza uses the Direct Connect Architecture and HyperTransport technology to provide an open x86 ecosystem for other hardware vendors.
AMD also made a big deal about the new dual-core chips being "electrical-, thermal- and socket-compatible" with the upcoming quad-core Opterons. Why? Because Intel probably will not be able to claim such a feat when they release their quad-core "Clovertown" x86 chip later this year. Since AMD is not expecting to release its own quad-core processor until sometime in the middle of 2007, this probably seemed like a good time to launch a preemptive strike on Clovertown.
Intel's Clovertown has been characterized as a "dual dual-core" rather than a true quad: basically, Intel has packaged two Woodcrest dual-core processors together. It's expected to run hotter than the current Woodcrest chips, and the four Intel cores will share the same front-side bus, as opposed to the quad-core AMD chip, which will be supported by dedicated HyperTransport links. Since neither company has even demonstrated its quad-core offerings yet, it's hard to make performance predictions about them. But in general, AMD appears to be taking the high road, going for performance and scalability, but at a higher cost than Intel's package deal. When both chips are on the market in 2007, comparing them will be just as difficult as comparing the current crop of dual-core offerings.
Which brings us back to the new dual-core Opteron processor. The inclusion of DDR2 memory support in AMD's chip has probably tightened the performance race between the Opteron and the latest Xeon offering from Intel. Analysts report that applications that can make use of the large Xeon cache will probably run faster on the Intel chip, while applications that are constrained by main memory bandwidth will tend to perform better on the Opteron. If you're a performance-minded customer, it's probably best to test your actual application on a real system and forget about the company-sponsored benchmarks. We already know how they turned out.
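If you want a quick feel for whether your code is cache-friendly or bandwidth-bound before committing to real hardware testing, a crude triad-style timing loop makes the distinction visible. This is only an illustration -- the array sizes and the NumPy dependency are my choices, not anything from AMD or Intel:

```python
import time
import numpy as np

def triad_gbps(n, reps=20):
    """Time a STREAM-style triad (a = b + s*c) over arrays of n doubles.

    Small n stays resident in cache; large n is dominated by main memory
    bandwidth. Returns effective GB/s (3 arrays x 8 bytes per element).
    """
    b = np.random.rand(n)
    c = np.random.rand(n)
    a = np.empty(n)
    s = 3.0
    start = time.perf_counter()
    for _ in range(reps):
        np.multiply(c, s, out=a)   # a = s * c
        np.add(a, b, out=a)        # a = a + b
    elapsed = time.perf_counter() - start
    return (3 * 8 * n * reps) / elapsed / 1e9

for n in (50_000, 50_000_000):     # roughly cache-resident vs. memory-bound
    print(f"n = {n:>11,}: {triad_gbps(n):6.1f} GB/s effective")
```

However a toy loop like this comes out on your machine, the real verdict still belongs to your actual application running end to end.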
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - August 17, 2006 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.