August 26, 2010
A lot of high performance computing is liable to end up in public clouds in the coming years. Besides avoiding the initial capital outlay for an HPC system, one of the biggest attractions of external clouds is being able to foist cluster administration on someone else. Despite the plethora of cluster management tools available, dealing with the complexities of HPC clusters is still mostly an art. And because high performance computing is somewhat of a niche, the number of people who can honestly claim the title of HPC system administrator is relatively small.
But what if you require an in-house HPC set-up, for whatever reason (security, need for specific hardware, cost benefit of continuous usage, etc.) and still don't want to mess with the burden of cluster administration? It turns out there is an alternative.
A company called X-ISS offers a managed service that allows organizations to outsource system administration for in-house HPC clusters. Founded in 1993 by Deepak Khosla, X-ISS is a Houston-based IT management company that has been in the HPC management business for the last 10 years. They currently manage several thousand nodes across about 500 separate installations. Their HPC services product, appropriately named ManagedHPC, is mostly aimed at clusters between 4 and 256 nodes, but at least one customer has a system with a few thousand servers.
Apparently the company has made a decent business out of their HPC services offering, which they characterize as "turnkey outsourced system management." According to Khosla, X-ISS has been growing at 20 percent per year over the past five years.
The company's pitch is that they let an organization focus on its core competency, whatever that domain happens to be, relieving the company of the bother of maintaining an HPC admin support staff. This is essentially the same advantage offered by the cloud model, but in this case, the infrastructure is customer-owned and on-site. The provided service manages all the hardware, cluster provisioning, and vendor support issues. If something goes wrong, the customer just calls up X-ISS or opens a ticket with them. "If you have people that are really interested in their core expertise, they can pretty much turn their cluster over to us," says Khosla.
Cluster support is performed remotely for the most part, requiring someone on-site only to perform physical reboots when the machine gets really stubborn. Application submissions and restarts are also handled locally, in much the same way as a cloud scenario. But if a fan fails or a memory chip gets fried, X-ISS calls the appropriate vendor to get it swapped out. Then X-ISS will reprovision the system remotely to get it back up and running. When new software or patches need to be installed, the company does this as well. Of course, the appropriate vendor maintenance contracts need to be in place; X-ISS just takes over the task of managing that support.
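X-ISS hasn't published its internal tooling, but the remote-support loop described above -- probe each node for hardware faults, then open a ticket with the appropriate vendor when something fails -- can be sketched in a few lines of Python. Everything here (node names, the fault data, and the `check_node` and `open_ticket` helpers) is purely illustrative, not the company's actual software.

```python
# Illustrative sketch of a remote-support sweep: poll node health and
# open a vendor ticket for each failed component. The node names and
# fault data below are made up for demonstration purposes.

def check_node(node):
    """Pretend health probe: returns a list of failed components.

    In practice this step would run remote commands (e.g., sensor
    reads) over a tunnel into the customer's cluster.
    """
    simulated_faults = {"node042": ["fan"], "node107": ["dimm"]}
    return simulated_faults.get(node, [])

def open_ticket(node, component):
    """Record a support ticket for one failed hardware component."""
    return {"node": node, "component": component, "status": "open"}

def sweep(nodes):
    """Check every node and collect the tickets that need filing."""
    tickets = []
    for node in nodes:
        for component in check_node(node):
            tickets.append(open_ticket(node, component))
    return tickets

print(sweep(["node001", "node042", "node107"]))
```

In this toy run, only the two nodes with simulated faults generate tickets; a healthy node passes through silently, which mirrors the "call us only when something breaks" model the service is built around.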
As part of the service, X-ISS analyzes system usage and generates quarterly reports. To do this, they employ their home-grown DecisionHPC software to perform the monitoring, tracking and reporting. DecisionHPC is also sold as a separate product, its commercial availability having been announced last week. As with the service itself, data collection and analysis are performed at an X-ISS facility, requiring only a tunnel into the HPC machine to execute remote commands.
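DecisionHPC's internals aren't public, but the kind of usage accounting a quarterly report rests on is straightforward to sketch: roll up scheduler accounting records into node-hours per user. The record format below is an assumption for illustration, not DecisionHPC's actual schema.

```python
# Minimal sketch of usage accounting: sum node-hours per user from
# scheduler-style job records. The field names ("user", "nodes",
# "hours") are assumed for illustration.
from collections import defaultdict

records = [
    {"user": "alice", "nodes": 8,  "hours": 12.0},
    {"user": "bob",   "nodes": 32, "hours": 3.5},
    {"user": "alice", "nodes": 4,  "hours": 6.0},
]

def node_hours_by_user(records):
    """Aggregate node-hours (nodes x wall-clock hours) per user."""
    totals = defaultdict(float)
    for r in records:
        totals[r["user"]] += r["nodes"] * r["hours"]
    return dict(totals)

print(node_hours_by_user(records))
# alice: 8*12 + 4*6 = 120 node-hours; bob: 32*3.5 = 112
```

A real reporting pipeline would pull these records over the tunnel from the cluster's scheduler logs and break them down further by project, queue, and quarter, but the aggregation step looks much like this.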
The company sells its HPC service both directly and through partnerships. X-ISS teams with Tier 1 vendors such as IBM, Microsoft and Dell to offer managed HPC services on top of the vendor's offering. According to Khosla, these vendors like that relationship because it removes a sales roadblock for fussy customers who might otherwise balk at having to add special admin support for a new cluster. X-ISS also deals directly with end users, especially in the oil & gas sector, taking advantage of its Houston locale to leverage that customer base.
ManagedHPC prices start at just over $1,000 per month for small clusters. The price calculation is based primarily on cluster size and the number of applications, but other factors, like special customer requirements and the environment, can also come into play. Khosla maintains this delivers good ROI for the customer since they don't have to bring in a full-time admin who needs Linux or Windows-based HPC expertise. Adding specialized IT staff can be a scary prospect for customers, he says. They'd rather just treat these machines like mainframes that come pre-wrapped with support.
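The ROI claim is easy to check on the back of an envelope. The roughly $1,000-per-month starting price comes from the article; the fully loaded cost of a dedicated HPC administrator below is an assumed round number for illustration only.

```python
# Back-of-envelope ROI comparison for a small cluster.
# The $1,000/month starting price is from the article; the admin
# cost is an assumed illustrative figure, not a quoted number.
managed_service_annual = 1_000 * 12   # ~$12,000/year at the entry price
admin_fully_loaded = 120_000          # assumed salary + benefits + overhead

savings = admin_fully_loaded - managed_service_annual
print(savings)  # 108000 under these assumptions
```

Even if the actual service quote for a given cluster runs several times the entry price, the gap against a full-time specialist hire stays wide, which is the economic argument Khosla is making.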
HPC usage has plenty of challenges, and system administration is just one of them. However, it's worth pointing out that two of the most talked about pain points -- the price of HPC hardware and its energy consumption -- are actually trending in the customer's favor. The cost of FLOPS is declining and the number of FLOPS per watt is increasing. At the same time, the cost of cluster management support will probably keep rising, since it depends upon human labor. As such, X-ISS may have a sweet spot for customers who aren't yet ready for the cloud, but lack the expertise to support their own HPC systems.
Posted by Michael Feldman - August 26, 2010 @ 5:40 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.