September 16, 2008
When Intel and Cray became sweethearts back in April, I never imagined the first offspring from that relationship would be a personal supercomputer. But that's what happened. Today, Cray announced its first-ever deskside supercomputer, the CX1.
The system is a mini-cluster of up to eight blades and sports Intel's latest dual- and quad-core Xeon chips. The blades can be compute, visualization or storage nodes and can be mixed and matched according to need. If maximum storage is desired, up to 4 terabytes can be stuffed into a single enclosure. In its most compute-heavy configuration, a single CX1 chassis contains 64 cores, for a peak performance of around 780 gigaflops. (With an upgrade path to Nehalem processors promised, a teraflop deskside system is a sure bet.) Blade nodes are linked via Ethernet or InfiniBand, and up to three CX1 chassis can be hooked together without any additional switches.
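For the curious, that peak figure is easy to reproduce. The sketch below is a back-of-the-envelope estimate, assuming 3.0 GHz quad-core Xeons doing four double-precision flops per core per cycle (plausible figures for Intel's Core microarchitecture of the day, though Cray's exact SKUs aren't quoted here), and it lands right around the number above:

```python
# Back-of-the-envelope peak for a fully populated CX1 chassis.
# Assumed figures (not from Cray's spec sheet): 8 blades x 2 sockets
# x 4 cores = 64 cores, 3.0 GHz clock, 4 double-precision flops per
# core per cycle (simultaneous SSE add and multiply).

blades = 8
sockets_per_blade = 2
cores_per_socket = 4
clock_ghz = 3.0
flops_per_cycle = 4

cores = blades * sockets_per_blade * cores_per_socket
peak_gflops = cores * clock_ghz * flops_per_cycle
print(f"{cores} cores -> {peak_gflops:.0f} peak gigaflops")
# 64 cores -> 768 peak gigaflops, in line with the ~780 quoted above
```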
The visualization node consists of an NVIDIA Quadro FX card and some supporting Xeons. It essentially represents a workstation within a cluster. There was no specific talk about NVIDIA HPC compatibility, such as using a Tesla C1060 GPU computing card in the enclosure, but presumably even the Quadro card could be used for CUDA-accelerated applications. If so, users could tap hundreds more gigaflops for computing.
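As a minimal sketch of what that would involve (using the PyCUDA bindings purely for illustration; nothing of the sort is part of the CX1 announcement), an application could probe the visualization blade for a CUDA-capable device before offloading work to it:

```python
# Minimal sketch: enumerate CUDA-capable devices, such as the Quadro FX
# on the visualization blade, before offloading compute work.
# Assumes the PyCUDA bindings are installed; nothing here is CX1-specific.
import pycuda.driver as cuda

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    mem_mb = dev.total_memory() // (1024 * 1024)
    print(f"{dev.name()}: compute capability {major}.{minor}, {mem_mb} MB")
```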
The CX1 can be configured and ordered online, just as you would a PC. A minimally configured box runs about $25K, while a fully tricked-out CX1 would cost closer to $80K. Software can be pre-loaded, although I didn't see any way to specify applications or cluster management tools with the online form.
Now for the second surprise. Linux-loving Cray is offering Microsoft's new Windows HPC Server 2008 on the CX1. "Offering" is actually a bit of a misrepresentation. Windows appears to be the default cluster OS for the CX1, although Red Hat Linux is also available. At Tuesday's press conference for the CX1 announcement, Microsoft Technical Fellow Burton Smith (and former Cray chief scientist) extolled the virtues of the new relationship, noting: "This is a very significant day for us." Kyril Faenov, general manager of Microsoft's HPC business unit, was also on hand to talk up their new HPC Server, due to be officially rolled out next week.
For Microsoft, the new relationship with Cray could be a watershed event in the HPC community. To have the iconic supercomputing company cozying up to the iconic PC software firm will likely cause some rending of garments by long-time Cray fans, who view Microsoft as the devil incarnate. But the new realities of the HPC market were bound to drive these two companies together, not to mention that they only live about five miles from each other.
It may seem contradictory that a company that prides itself on its terascale (soon to be petascale) supercomputers has chosen to make a play at gigascale. But the HPC landscape is such that the low end of the market is where much of the action is and, at the same time, where a lot of potential remains untapped. Inhibiting this part of the market is the difficulty users encounter in jumping from workstations to HPC clusters, since software applications are often not portable across this gap. Cost and lack of system administration expertise are additional roadblocks to wider use of low-end HPC. By offering standard ISV software on standard operating systems and standard hardware, Cray hopes to make that transition much smoother.
At the same time, there are plenty of current HPC users who realize that as processor horsepower continues to rise, smaller systems are catching up with their applications. A single CX1 chassis would have made it onto the TOP500 list just four years ago. Add to that the fact that a personal HPC system is much more appealing than sharing a larger one, even if the larger one is somewhat more powerful. For example, if you have to wait 12 hours in a queue to run a 10-minute job that would take 30 minutes on a personal machine, the slower system ends up being a lot faster in practice.
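To put numbers on that turnaround argument, here is the same comparison worked out, using only the figures from the example above:

```python
# Turnaround time comparison: a shared system with a 12-hour queue
# wait versus a slower personal machine with no queue.
queue_wait_min = 12 * 60   # waiting for the shared system
shared_run_min = 10        # job runtime on the shared system
personal_run_min = 30      # same job on the (slower) personal machine

shared_total = queue_wait_min + shared_run_min   # 730 minutes
personal_total = personal_run_min                # 30 minutes
print(f"shared: {shared_total} min, personal: {personal_total} min")
print(f"personal machine wins by {shared_total / personal_total:.0f}x end to end")
```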
The new Cray venture is not without its risks. So far, commercial success has eluded personal supercomputing. Over the past few years, Orion Multisystems and Tyan both offered deskside HPC machines. Orion went belly up in 2006, and last year Tyan's personal supercomputer offering was spun off to a separate division that quietly faded away. But more powerful processors, faster interconnects, and more mature system and application software may mean the time is now right for such a product. And with Cray, Microsoft and Intel standing behind it, personal supercomputing may have found a winning combination.
(For more on this story, read our Q&A with Ian Miller, Cray's senior vice president of sales and marketing.)
Posted by Michael Feldman - September 15, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.