May 28, 2009
As we reported yesterday, the five venture capital firms supporting Linux cluster vendor SiCortex have pulled their funding, forcing the Massachusetts-based company to shut down its operations. As the company prepares to sell off its assets, there are said to be only a handful of employees remaining. Unless a buyer comes in that is willing to take over the business more or less intact, support for dozens of SiCortex systems currently deployed at user sites will come to an abrupt end.
As of today, the company has 37 customers listed on its Web site, including such big names as Argonne National Lab, MIT, NASA, Karlsruhe Institute of Technology, Chevron, and General Electric. More than 20 universities have also purchased SiCortex gear. Undoubtedly, some of these customers will be able to migrate their HPC applications onto spare capacity as they phase out their orphaned SiCortex machines. For others, though, the transition is going to be a very painful one.
I spoke with a university customer that had purchased a mid-range SiCortex cluster, which now represents the institution's newest HPC platform. The system administrator there, who asked to remain anonymous, was wondering what they were going to do without vendor support. "This really puts us in a huge bind," he told me.
According to him, the university shelled out around $150,000 for the SiCortex cluster, the largest investment ever made at this particular lab. They had planned for the system to be their HPC platform of the future, augmenting the lab's aging x86-based clusters. SiCortex was chosen because of space and power constraints at the datacenter, and because university researchers there were working on computational models that scaled extremely well on high core-count architectures. Compared to conventional x86 clusters, SiCortex systems use greater numbers of less powerful MIPS cores to deliver some of the best performance-per-watt metrics in the industry.
The SiCortex design has garnered a lot of critical acclaim, but according to my system admin source, it hasn't been exactly smooth sailing with the company's hardware. He said that while SiCortex's support and service have been "outstanding," the machines themselves have had their problems. The first system purchased by the university, the SC648, didn't live up to its advertised performance. The company ended up swapping the system for the more powerful SC1458 machine. However, even this system has been troublesome. According to the system administrator, during the 16 to 18 months the system has been running, they have not gone four consecutive months without some kind of hardware failure.
Currently, two of the 243 compute nodes on the system are down. While that is probably not an outlandish failure rate for a cluster of this size, without vendor support the lifetime of such a machine will be greatly reduced. The real fear is that a critical failure could occur at any time, rendering the system totally worthless.
One unfortunate consequence of SiCortex's demise is that, for a while at least, there will be no high core-count HPC platform in the price range of a mid-sized cluster and with the performance efficiency of an IBM Blue Gene. Sometime in the first half of 2010, you should be able to get an 8-socket, 8-core Nehalem-EX server that supports 128 hardware threads. A dozen of these servers would provide over 1,500 threads, more or less equivalent to a 1,458-core SiCortex machine, at least thread-wise. But the price and power consumption (not to mention performance) of a Nehalem cluster of this size are likely to be a good deal higher than those of the equivalent SiCortex system.
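For readers who want to check the thread math, here is a quick back-of-the-envelope calculation in Python. The per-server figures are the projections quoted above, not measured specifications:

# A minimal sketch, using only the figures cited in this article.
servers = 12              # "a dozen of these servers"
sockets_per_server = 8    # projected 8-socket Nehalem-EX box
cores_per_socket = 8      # 8 cores per Nehalem-EX processor
threads_per_core = 2      # HyperThreading: two threads per core

nehalem_threads = servers * sockets_per_server * cores_per_socket * threads_per_core
sicortex_cores = 1458     # SC1458: 1,458 MIPS cores, one thread each

print(nehalem_threads)    # 1536 -- "over 1,500 threads"
print(sicortex_cores)     # 1458 -- more or less equivalent, thread-wise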
The broader tragedy is that the company appeared headed for commercial success. In my last conversation with the company, in April, I was told the business had achieved record growth in Q1 and had built a big pipeline of customers for the remainder of 2009. SiCortex was already pitching its more powerful next-generation systems (more cores and faster processors). If the VCs hadn't blinked, the company might have begun to turn a profit this year or next, despite the economic downturn.
The result of SiCortex's demise is that HPC users will likely become even more conservative in their choice of cluster vendors. If they can. For the university in this story, there are no easy answers, since the funds may not be there to replace the SC1458 system anytime soon. Even if they were, the researchers were getting used to running their apps on a high core-count machine. "We'll figure something out," said the system admin. "But in the meantime, my scientists are freaking."
Posted by Michael Feldman - May 28, 2009 @ 3:28 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.