April 29, 2008
The Intel-Everywhere narrative got a boost on Monday when Cray and Intel announced a multi-year deal intended to create advanced supercomputing technology. The alliance is designed to boost Cray's prospects in the supercomputing arena and accelerate Intel's HPC mindshare and technology strength in the high-end processor market. The first Intel-based Cray machines are expected to be introduced in 2011 or 2012.
Over the past couple of years Intel has been making a big push to get back into the HPC space. The company has aggressively evolved its Xeon architecture and managed to get its processors into a dominant position on the TOP500 list (354 out of 500 positions and 3 of the top 5). What they're looking for now is to eclipse AMD and IBM in the most elite supercomputing systems. By teaming up with AMD-loving Cray for what looks like a deep and sustained collaboration, Intel has achieved another milestone along that path. The rationale behind the integrated R&D effort is to bring Intel technology into Cray supercomputers and Cray supercomputing smarts into Intel's microprocessors and software tools.
Cray will continue to support and develop its AMD Opteron-based XT product line for the foreseeable future, but like practically every other OEM in the HPC space, Cray is going to let the market decide what its x86 product mix will look like down the road.
Since Cray operates at the bleeding edge of the HPC space, the company needs to be competitive with IBM supercomputing and maintain some distance from HPC cluster makers like Sun Microsystems, HP, SGI, and others. With that in mind, Cray came to the conclusion that it couldn't afford to ignore Intel any longer. With the ascendancy of Intel technology over the last couple of years, and with AMD focused on regaining profitability via its lower-end volume processor offerings, Cray decided now was the time to broaden its x86 horizons.
"If you look at the worst times for Intel in HPC two years ago, we were losing share on the TOP500 list and we had, in many cases, uncompetitive performance on some of the key workloads," admits Kirk Skaugen, general manager of Intel's Server Platforms Group. Now, he says, with Intel's head start on the 45nm process node and the company's aggressive tick-tock development cycle, things have turned around. With the upcoming Nehalem processor family, the new QuickPath Interconnect (QPI) technology, and a refocused HPC division, Skaugen thinks Intel has the best momentum it has ever experienced in high performance computing.
A key part of the agreement announced on Monday includes the licensing of Intel's new QuickPath Interconnect technology to Cray. QPI is a critical enabler for Cray. Like AMD's HyperTransport interconnect, QPI will enable the supercomputer maker to integrate its high performance system fabric (SeaStar or its successor) with IA processors. Intel plans to introduce QuickPath later this year on its upcoming Tukwila (Itanium) and Nehalem (Xeon) processors.
Probably the biggest factor that drove the two companies together was their shared vision of using manycore technology to move system performance beyond a simple Moore's Law trajectory. Due to power constraints, manycore seems like the only feasible way to get to tens of petaflops in the next few years.
While the first petaflop system is expected by late 2008, some applications will need a lot more than that. For example, NASA estimates that it needs a million times the computing power it has today to accurately predict hurricane behavior a couple of weeks in advance. And software for designing personalized pharmaceutical drugs needs something in the neighborhood of an exaflop of computing performance.
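To put that million-fold figure in perspective, a back-of-the-envelope calculation shows how many performance doublings it implies. The doubling periods below are illustrative assumptions in the spirit of Moore's Law, not figures from NASA:

```python
import math

# NASA's figure (from the article): roughly a million times today's
# computing power to predict hurricane behavior weeks in advance.
target_factor = 1_000_000

# Number of performance doublings needed for a million-fold increase.
doublings = math.log2(target_factor)  # ~19.9, i.e. about 20 doublings

# Assumed doubling periods (18 and 24 months), purely illustrative.
for months_per_doubling in (18, 24):
    years = doublings * months_per_doubling / 12
    print(f"{months_per_doubling} months/doubling -> ~{years:.0f} years")
```

At a historical doubling pace, that works out to roughly 30 to 40 years of waiting on process scaling alone, which is exactly why both companies are betting on manycore parallelism rather than clock speed to close the gap sooner.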
"The entire TOP500 list today is about 7 petaflops," said Skaugen. "We think we'll pass that with a single machine in the 2012 timeframe." That coincides with Cray's timeline for its "Cascade" multi-petaflop supercomputers, which is where the first Intel processors are expected to show up.
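Skaugen's projection can be sanity-checked using only the figures in this article: a first ~1 petaflop system in late 2008, and a single ~7 petaflop machine around 2012. The four-year span is an assumed reading of those dates:

```python
# Implied growth rate of peak system performance, using the article's
# own numbers: ~1 petaflop in late 2008, ~7 petaflops by 2012.
start_pf, end_pf = 1.0, 7.0   # petaflops
years = 4                     # late 2008 -> 2012 (assumed span)

# Compound annual growth rate needed to get from start_pf to end_pf.
growth_per_year = (end_pf / start_pf) ** (1 / years)
print(f"implied growth: ~{growth_per_year:.2f}x per year")  # ~1.63x
```

A sustained ~1.6x improvement per year in peak machine performance is aggressive but in line with the trajectory the TOP500 list had been tracing, so the 2012 target is ambitious rather than outlandish.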
Intel is not talking about any specific microprocessor product line for the Cray systems, since it's not on the chipmaker's public roadmap yet. But it's likely to be an Intel Architecture (IA) processor with a lot more than eight cores -- something equivalent to a manycore Xeon. Since the company has already demonstrated a non-IA 80-core chip that achieves over one teraflop, it wouldn't be too much of a stretch to think it has plans for a commercial manycore Xeon on the drawing board, at, say, 22nm. Intel is also looking to extend the work it's done with its multithreading tools and compiler technology into a manycore framework. While all this technology will be directly applicable to Cray supercomputing, Intel is also looking to apply the resulting products to lower-end HPC and mainstream enterprise systems.
With the new emphasis on productivity, rather than just raw performance, Cray CEO Peter Ungaro believes this is good timing for the partnership. It gives Cray access to Intel's expertise with multithreaded-aware software tools, and it provides Intel access to advanced system software and a supercomputing programming model. The Cray CEO sees the collaboration as a great opportunity to make a real advancement in productivity.
Engineers from the two companies are already working together and, according to Ungaro, customers are upbeat about the new relationship. Even though Cray's adaptive computing roadmap is not going to fundamentally change, some of the underlying technology and products will certainly be new. For now, Cray is sharing more specific plans with just a few select customers. But Ungaro believes the most significant aspect of the new collaboration is not that they'll be selling systems with Intel processors, but that the two companies can synergize their unique strengths to attack the most persistent problems in supercomputing: scalability and programmability.
"At the high level, I think it's a really compelling partnership between two industry leaders," says Ungaro. "By putting our respective technical strengths together, I think we're really going to have the opportunity to dramatically advance the future of HPC systems, and ultimately change the landscape of the supercomputing industry."