September 16, 2008
When Intel and Cray became sweethearts back in April, I never imagined the first offspring from that relationship would be a personal supercomputer. But that's what happened. Today, Cray announced its first ever deskside supercomputer, the CX1.
The system is a mini-cluster of up to eight blades and sports Intel's latest dual- and quad-core Xeon chips. The blades can be compute, visualization or storage nodes and can be mixed and matched according to need. If maximum storage is desired, up to 4 terabytes can be stuffed into a single enclosure. In its most compute-heavy configuration, a single CX1 chassis contains 64 cores, for a peak performance of around 780 gigaflops. (With an upgrade path to Nehalem processors promised, a teraflop deskside system is a sure bet.) Blade nodes are linked via Ethernet or InfiniBand, and up to three CX1 chassis can be hooked together without any additional switches.
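The quoted peak figure is easy to sanity-check with back-of-the-envelope arithmetic. The announcement doesn't specify the exact Xeon SKUs, so the clock speed and flops-per-cycle below are illustrative assumptions (a 3.0 GHz quad-core part issuing 4 double-precision flops per cycle via SSE):

```python
# Rough peak-flops estimate for a fully loaded CX1 chassis.
# The 3.0 GHz clock and 4 flops/cycle are assumptions, not
# figures from Cray's announcement.
cores = 64            # 8 blades x 2 sockets x 4 cores
clock_ghz = 3.0       # assumed clock speed
flops_per_cycle = 4   # SSE: one 2-wide multiply + one 2-wide add per cycle

peak_gflops = cores * clock_ghz * flops_per_cycle
print(peak_gflops)    # 768.0 -- in the ballpark of the quoted ~780 gigaflops
```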
The visualization node consists of an NVIDIA Quadro FX card and some supporting Xeons. It essentially represents a workstation within a cluster. There was no specific talk about NVIDIA HPC compatibility, such as using a Tesla C1060 GPU computing card in the enclosure, but presumably even the Quadro card could be used for CUDA-accelerated applications. If so, users could tap hundreds more gigaflops for computing.
The CX1 can be configured and ordered online, as you would do for a PC. A minimally configured box runs about $25K, while a fully tricked out CX1 would cost closer to $80K. Software can be pre-loaded, although I didn't see any way to specify applications or cluster management tools with the online form.
Now for the second surprise. Linux-loving Cray is offering Microsoft's new Windows HPC Server 2008 on the CX1. "Offering" is actually a bit of a misrepresentation: Windows appears to be the default cluster OS for the CX1, although Red Hat Linux is also available. At Tuesday's press conference for the CX1 announcement, Microsoft Technical Fellow Burton Smith (a former Cray chief scientist) extolled the virtues of the new relationship, noting: "This is a very significant day for us." Kyril Faenov, general manager of Microsoft's HPC business unit, was also on hand to talk up the new HPC Server, due to be officially rolled out next week.
For Microsoft, the new relationship with Cray could be a watershed event in the HPC community. To have the iconic supercomputing company cozying up to the iconic PC software firm will likely cause some rending of garments by long-time Cray fans, who view Microsoft as the devil incarnate. But the new realities of the HPC market were bound to drive these two companies together, not to mention that they only live about five miles from each other.
It may seem contradictory that a company that prides itself on its terascale (soon to be petascale) supercomputers has chosen to make a play at gigascale. But the HPC landscape is such that the low end of the market is where much of the action is and, conversely, where there appears to be a lot of untapped potential. Inhibiting this part of the market is the difficulty users encounter in jumping from workstations to HPC clusters, since software applications are often not portable across this gap. Cost and lack of system administration expertise are additional roadblocks to wider use of low-end HPC. By offering standard software from ISVs on standard operating systems and standard hardware, Cray hopes to make that transition much smoother.
At the same time, there are plenty of current HPC users who realize that as processor horsepower continues to rise, smaller systems are catching up with their applications. A single CX1 chassis would have made it onto the TOP500 list just four years ago. Add that to the fact that a personal HPC system is much more appealing than sharing a larger system, even if the larger one is somewhat more powerful. For example, if you have to wait 12 hours in the queue to run a 10-minute job that would run in 30 minutes on a personal machine, the slower system ends up being a lot faster.
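The turnaround arithmetic above is worth making explicit. Using the hypothetical numbers from the text, queue wait dominates the comparison:

```python
# Turnaround-time comparison: a shared cluster that runs the job
# faster can still deliver results much later once queue wait is
# counted. Numbers are the hypothetical ones from the article.
queue_wait_min = 12 * 60   # 12-hour wait in the shared system's queue
shared_run_min = 10        # job runs in 10 minutes once scheduled
personal_run_min = 30      # same job takes 30 minutes on a deskside box

shared_turnaround = queue_wait_min + shared_run_min
personal_turnaround = personal_run_min
print(shared_turnaround, personal_turnaround)  # 730 30
```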
The new Cray venture is not without its risks. So far, commercial success has eluded personal supercomputing. Over the past few years, Orion Multisystems and Tyan both offered deskside HPC machines. Orion went belly up in 2006, and last year Tyan's personal supercomputer offering was spun off to a separate division that quietly faded away. But more powerful processors, faster interconnects, and more mature system and application software may mean the time is now right for such a product. And with Cray, Microsoft and Intel standing behind it, personal supercomputing may have found a winning combination.
(For more on this story, read our Q&A with Ian Miller, Cray's senior vice president of sales and marketing.)
Posted by Michael Feldman - September 15, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.