October 06, 2008
While the Microsoft juggernaut has been touting the joys of its new Windows HPC Server 2008, the Linux HPC contingent has been somewhat less vocal of late. Or at least less organized. Unlike the Linux vendors, Microsoft holds a monopoly on its OS, so any HPC product it comes up with becomes, by default, the Windows standard. There are plenty of Linux-based HPC software stacks out there, but with the maturation of the high performance cluster market, many customers are becoming more interested in platform stability and robustness than in choice.
The big Linux server players -- Red Hat and Novell -- have left integrated HPC solutions up to OEMs, system integrators, or the users themselves. This worked fine when most HPC customers lived in government supercomputing labs or large commercial datacenters, and had enough Linux expertise to choose the right system software stacks or assemble their own. But for the rapidly growing low end of the HPC market, the diversity of Linux solutions is an unwanted distraction.
The Red Hat HPC solution announced last Friday is a recognition that the HPC customer base is growing less sophisticated overall and is demanding more integrated solutions. The new HPC product is a combination of Red Hat Enterprise Linux (RHEL) and Platform Computing's Open Cluster Stack (OCS) 5. The open source OCS brings in a cluster manager, cluster file system support, a workload and resource manager, and a variety of HPC utilities. The Red Hat product was developed from a partnership announced last November, and no doubt encouraged by Microsoft's move into the HPC arena with its original Windows Compute Cluster Server 2003 offering.
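To give a flavor of what the integrated stack spares a user from assembling by hand: OCS 5 bundled Platform Lava, an open source derivative of Platform's LSF scheduler, as its workload manager. A minimal interaction with such a stack might look like the following sketch (the solver name, core count, and output file are hypothetical, and assume Lava's LSF-style commands):

```shell
# List the compute nodes the cluster manager has provisioned
bhosts

# Submit an 8-way MPI job; Lava uses LSF-style bsub syntax,
# with %J expanding to the job ID in the output file name
bsub -n 8 -o solver.%J.out mpirun ./cfd_solver input.dat

# Check the job's progress through the queue
bjobs
```

The point of the bundled product is that these pieces -- provisioning, scheduling, MPI -- arrive preconfigured to work together, rather than as separately integrated components.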
Up until now, Red Hat offered RHEL for HPC compute nodes, which was basically the company's cut-down version of Linux tuned for 64-bit computing. That product has been quite popular with large government and university research labs, which tended to do their own integration of middleware and other cluster tools. The new Red Hat product is for HPC newbies and other price-conscious customers that have little interest in integrating and maintaining HPC system software, and are looking for a more traditional support relationship with their OS and hardware vendor.
Red Hat will bundle and sell the new offering as a subscription service and be the lone point of contact for support. "The customer only has to deal with one vendor to get his complete solution stack, instead of having to put it together himself and worry about all the IP plumbing with multiple vendors," said Red Hat product marketing manager Gerry Riveros, stealing a line out of Microsoft's playbook. Directly addressing the Windows competition, Riveros added: "We give you better performance and stability at the operating system level and we combine that with more powerful cluster and workload toolsets than what is delivered by Microsoft." Supposedly you can get a Red Hat HPC cluster up and running after only 10 screens of installation menus.
The level of usability, performance and robustness remains to be discovered, but Red Hat seems determined not to cede the low end of the HPC market to Microsoft. The Red Hat HPC package is offered at $249 per node per year, which includes tech support, bug fixes and security patches. Microsoft's HPC server costs $475 to license, with software maintenance available as an add-on for perhaps 25 or 30 percent more. It's not clear if anyone will ever pay these prices, since the Windows HPC Server 2008 product will only be available through volume licensing programs, OEMs and a Service Provider License Agreement (SPLA) -- whatever that entails. As of this writing, these pricing schemes have not been elaborated.
Like Microsoft, the Red Hatians are sure to discount per-node pricing for bigger clusters too. "Whatever Microsoft is going to discount off their list price, we're going to match that discount," Riveros told me. There is room for that aggressive stance: Platform sells OCS on its own for $150 per node, so at the $249 base price of the HPC product, the extra $99 essentially buys the RHEL subscription for a two- or four-socket server.
Most of Red Hat's big customers in government labs and the oil and gas industry will probably stick with the stand-alone version of RHEL, and build their own customized HPC stacks around the OS. Like Microsoft, Red Hat is looking to tap the fast growing department and workgroup HPC segments for their integrated product. However, success is not guaranteed for either vendor, since the lowest end of the HPC market is probably the least understood.
Especially in the workgroup segment, there's a possibility that users (and ISVs) will forgo clusters and wait for personal SMP workstations to provide supercomputing capability. The advent of quad-core -- and soon eight-core -- processors from Intel and AMD, along with the introduction of teraflop-level GPU computing, means that technical applications can now tap into a lot of FLOPS without resorting to MPI, job schedulers, and cluster management tools. The Visual Supercomputer that Velocity Micro announced on Monday gives us some idea of the potential of non-clustered HPC platforms.
But even if supercharged technical workstations eat away some of the low end of the HPC server market, the sheer volume of existing MPI codes guarantees that cluster computing will be mainstream for some time to come. And with Red Hat now offering an alternative to the Windows platform, many server makers will likely end up offering both products on their hardware. At launch, Dell was announced as the initial OEM partner, but Riveros said they're already talking with other system vendors.