September 06, 2011
Erlang Solutions and Massive Solutions will soon launch a new cloud platform for high performance computing. Last month they announced their intent to bring a virtual supercomputer (VSC) product to market, the idea being to enable customers to share their HPC resources either externally or internally, in a cloud-like manner, all under the banner of open source software.
The platform will be based on Clustrx and Xpandrx, two HPC software operating systems that are the result of several years of work by Erlang Solutions, based in the UK, and Massive Solutions, based in Gibraltar. Massive Solutions has been the driving force behind the development of these two OSes, using Erlang language technology developed by its partner.
In a nutshell, Clustrx is an HPC operating system, or more accurately, middleware, which sits atop Linux, providing the management and monitoring functions for supercomputer clusters. It is run on its own small server farm of one or more nodes, which are connected to the compute servers that make up the HPC cluster. The separation between management and compute enables it to support all the major Linux distros as well as Windows HPC Server. There is a distinct Clustrx-based version of Linux for the compute side as well, called Compute Based Linux.
Since it runs on top of Linux, Clustrx is also hardware-independent. And thanks once again to the separation between management and compute, it can be set up to work with virtually any HPC architecture. With the proper configuration and some minor software tweaking, it could even manage custom supercomputers, like IBM's Blue Gene.
So far, Clustrx has been installed on half a dozen machines in Europe, mostly those from Russian HPC vendor T-Platforms. The most notable deployment is on the 674-teraflop Lomonosov supercomputer at Moscow State University.
The sequel to Clustrx is Xpandrx, basically a superset of the former that has incorporated hypervisor capabilities. This addition makes it capable of creating multiple virtual supercomputers across one or more connected clusters. Essentially it does what Clustrx does -- delivers an operating environment and software to compute nodes, schedules jobs, monitors execution, etc. -- but does so within a virtual machine environment. By doing so it makes an entire datacenter, heterogeneous or otherwise, behave as a single system.
Xpandrx instances use a lightweight virtualization scheme to maximize performance. It supports two virtualization flavors: Linux containers and the Kernel-based Virtual Machine (KVM). According to Massive Solutions founder Viktor Sovietov, in both cases, overhead is low -- less than one percent for Linux containers, and no more than 4 percent for KVM. That's important for supercomputing applications that need to squeeze as much performance out of the hardware and software stack as possible.
The only limitation to this model is its dependency on the underlying capabilities of Linux. For example, although Xpandrx is GPU-aware, since GPU virtualization is not yet supported in any Linux distros, the VSC platform can't support virtualization of those resources. More exotic HPC hardware technology would, likewise, be out of the virtual loop.
The common denominator for VSC is Erlang -- not just the company, but the language (http://www.erlang.org/), which is designed for programming massively scalable systems. The Erlang runtime has built-in support for things like concurrency, distribution and fault tolerance. As such, it is particularly suitable for HPC system software and large-scale interprocess communication, which is why both Clustrx and Xpandrx are implemented in the language.
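To give a flavor of those built-in primitives, here is a minimal, hypothetical Erlang sketch (not taken from Clustrx or Xpandrx) showing how the runtime handles concurrency and process failure: a monitored worker process crashes, and instead of bringing down its parent, the crash arrives as an ordinary message.

```erlang
%% Hypothetical sketch: Erlang's process-monitoring primitives.
%% The module and function names here are illustrative only.
-module(monitor_demo).
-export([start/0]).

start() ->
    %% Spawn a worker and monitor it in one step. If the worker
    %% dies, the runtime delivers a 'DOWN' message to this process
    %% rather than crashing it -- the basis of Erlang-style
    %% fault tolerance.
    {Pid, Ref} = spawn_monitor(fun() -> exit(node_failure) end),
    receive
        {'DOWN', Ref, process, Pid, Reason} ->
            io:format("worker ~p failed: ~p~n", [Pid, Reason])
    end.
```

Supervision trees in real Erlang systems generalize this pattern: a supervisor process monitors many workers and restarts them on failure, which is the kind of machinery a cluster-management layer like Clustrx can lean on.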
The companies will eschew the traditional licensing model, expecting instead to monetize the offering by taking a cut of the money charged for the computing service. This will be enabled by a built-in billing platform that can rate usage and monitor performance. Although the particulars have not been worked out, the general idea is to support business models between adopters, while at the same time driving revenue for Erlang Solutions and Massive Solutions. The two vendors will also charge for professional services for VSC setup, customization and technical support.
According to Marcus Taylor, commercial director at Erlang Solutions, the two companies will be delivering the Xpandrx software and the expertise to build a virtual supercomputer sometime in early 2012. The intention is that third-party HPC providers (labs, universities, companies) will band together and use the VSC software on their pre-existing HPC infrastructure. Taylor says the early adopters will most likely be UK universities and computing centers, which could greatly benefit from sharing their HPC resources amongst themselves.
The UK focus is being driven by the government and, in particular, the country's Technology Strategy Board, whose mission is to accelerate economic growth by stimulating and supporting business-led innovation -- in this case, by enabling greater access to high performance computing. Taylor says there have also been inquiries about VSC from oil and gas companies and engineering firms, presumably for in-house private HPC clouds.
Whether such a scheme works or not remains to be seen. Coming up with a general-purpose HPC-as-a-service model has remained elusive, despite a number of vendor solutions that have appeared over the past few years. In favor of VSC is the open source nature of the solution and its independence from the underlying OS and hardware. If it can be unobtrusively layered on top of existing HPC resources, so much the better. Proof points await.