September 08, 2006
The Road to Cluster Virtualization
As cluster use in enterprises grows, so does the need for commercial-grade high performance computing that scales on demand to adapt to ever-changing workload requirements and provides optimal system utilization. These needs in turn have driven many useful innovations. However, there has remained a fundamental assumption that a cluster or grid configuration is provisioned as a static, disk-based, full operating system installation on every single server.
This assumption leads to extensive scripting and middleware in an attempt to abstract out the complexity of managing hundreds or thousands of servers. In reality, this outdated approach only masks the complexity, without removing the underlying problem, and magnifies the operating costs of managing and maintaining large pools of servers. Rethinking these fundamental concepts can yield surprising results that can eliminate the very complexities many software "solutions" strive to merely camouflage. The key is Cluster Virtualization.
What is Cluster Virtualization? "Virtualization" is a heavily used term these days, but the most common understanding of it is "to abstract the complexities of many -- presenting the simplicity of one". The result of Cluster Virtualization is vastly simplified deployment and management of large pools of servers, accomplished by making very large groups of servers appear and act like a single system, as easy to manage as a single workstation. The financial and efficiency benefits of this approach are extremely compelling -- making Cluster Virtualization the most practical and cost-effective methodology for reducing the complexity, cost and overall administrative burden of large-scale computing, enabling you to get the most out of your computing resources.
Today most clusters are based on the Beowulf design developed by Thomas Sterling and Donald Becker, chief technology officer at Penguin Computing, while the two were at NASA. A Beowulf cluster is a group of usually identical commercial off-the-shelf (COTS) computers running Linux and other open source software, creating a straightforward, scalable platform at one tenth to one third the capital cost of a traditional supercomputer.
What Becker realized, and what led to the development of Cluster Virtualization architectures such as Scyld ClusterWare, was that while the original approach was straightforward and cost-effective on the capital side, the complexity and operational costs grew in direct proportion to the size of the cluster. He found that by re-architecting the foundation of cluster software based on three basic principles, the complexity and thus the cost could be dramatically reduced. Those principles are: stateless, diskless provisioning of compute nodes directly to memory; lightweight compute node operating environments; and a single system image presented through the Master node.
Leveraging these architectural concepts has a tremendous ripple effect on rapid provisioning, manageability, scalability, security and reliability within the cluster. The result is an elegantly simple and powerful new paradigm for clustered computing, eliminating multiple levels of cost and support, while dramatically increasing efficiency and reducing operating costs to deliver a dependable HPC service to your organization.
Daring To Go "Diskless"
With the maturity of high speed networking and network booting mechanisms, along with the growing complexity of managing large pools of servers, the number of IT architects recognizing the benefits of "stateless provisioning" (i.e., direct to memory via a network boot) of the server operating environment is growing. The fact is that it is dramatically easier and faster to provision and manage large server pools when you simply decide never to install a full operating environment to the hard disks.
A full OS installation to a disk drive is relatively slow, generally taking 15 to 30 minutes to complete, depending on a variety of factors. Then there is often a considerable amount of hand configuration of services, user accounts and remote access authentication that must happen after installation.
On the other hand, the Cluster Virtualization approach to provisioning has proven over many years to be a far simpler and more reliable mechanism. In this approach, the Linux OS and the cluster management software are installed only on the designated Master node, regardless of the size of the cluster. The compute nodes are then auto-provisioned with a cluster-aware, preconfigured operating system environment directly to memory, and no further configuration is generally required since Scyld ClusterWare sets up all the necessary configuration. The process takes approximately 20 seconds per node, after which they are ready to run.
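As a concrete illustration of stateless network boot in general -- not Scyld's specific implementation -- a Master node can serve a kernel and RAM-resident image to compute nodes using standard tools such as dnsmasq and PXELINUX. The paths, addresses and image names below are hypothetical:

```
# /etc/dnsmasq.conf on the Master -- illustrative stateless-boot setup
enable-tftp
tftp-root=/var/lib/tftpboot
dhcp-range=10.0.0.100,10.0.0.200,12h
dhcp-boot=pxelinux.0

# /var/lib/tftpboot/pxelinux.cfg/default -- boot compute nodes into memory
DEFAULT compute
LABEL compute
  KERNEL vmlinuz
  APPEND initrd=compute-initrd.img
```

Each compute node's firmware requests an address, pulls the kernel and in-memory root image over the network, and boots with no local disk involved.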
This rapid provisioning really pays off during ongoing cluster operation. First of all, with diskless provisioning, a local compute node disk is only used if the application requires it. Since disk drives are among the least reliable and most power hungry components in the cluster, requiring them outright means you will have more compute node failures, in addition to higher capital and operational expenses.
Furthermore, a full disk-based install on a replacement or new server is going to take an additional 15-30 minutes and will generally have to be scheduled by IT staff, increasing the time to restore the cluster to full operation. With diskless provisioning, simply plug the node in and it is up and running in 20 seconds.
Next, when software updates are required, you apply them only on the Master, which then re-provisions the compute nodes automatically and quickly. The old method requires elaborate scripting at best, with the chance that something goes wrong part way through. Then you have another problem: version skew.
The correctness of applications that run on multiple compute nodes is often dependent on everything being precisely the same on each processing element. The tiniest difference in a driver or library can render the results useless, after days of calculation.
Depending on local full-install operating environments is one sure way to realize problems due to version skew. You may spend hours or days trying to figure out which node has the wrong version of something and you may end up reinstalling everything to get back to a known state.
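Where skew must be tracked down by hand, a small script can at least automate the comparison. Below is a hypothetical helper, not part of Scyld ClusterWare: it reads lines of `node checksum` pairs -- as produced by, say, running `md5sum` on the same library on every node -- and reports the nodes that deviate from the majority value:

```shell
# skew_report: read "node checksum" lines on stdin, print the nodes whose
# checksum differs from the most common value (hypothetical helper).
skew_report() {
    awk '{ count[$2]++; node[NR] = $1; sum[NR] = $2 }
         END {
             best = ""; bestc = 0
             for (s in count) if (count[s] > bestc) { bestc = count[s]; best = s }
             for (i = 1; i <= NR; i++) if (sum[i] != best) print node[i], sum[i]
         }'
}

# Canned example: node3 carries a different build of the library.
printf 'node1 aaa\nnode2 aaa\nnode3 bbb\n' | skew_report
# -> node3 bbb
```

Gathering the input lines is the cluster-specific part: on a conventional cluster it means an ssh loop over every node, whereas on a virtualized cluster the Master can query the nodes directly -- and, more to the point, skew should never arise in the first place.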
Further optimizations on this concept of stateless provisioning are possible. This next section takes a look at the second architectural principle of Cluster Virtualization -- lightweight compute nodes.
Trimming Out The Fat
The dedicated nature of the cluster's compute resources means that it is unnecessary for a full Linux distribution to be provisioned to the compute nodes and there are many benefits to be had by right-sizing the compute node operating environment.
One of the immediate benefits of lightweight compute nodes is certainly performance. Part of the reason Scyld compute nodes are provisioned in under 20 seconds is that the operating system is significantly smaller than in traditional Beowulf-class designs. We have found that, in an apples-to-apples comparison, real-world applications generally run 5-10 percent faster with the Cluster Virtualization approach than with the traditional configuration.
Related to performance is memory usage, especially when insufficient memory forces an application into swap. On a typical cluster compute node with 1 GB of RAM, tests using Scyld ClusterWare consumed less than 1 percent of memory (8 MB), compared with 800 MB for a full Linux installation. This can mean the difference between running completely in memory and hitting swap space. These improvements may seem small, but in very large clusters with long-running jobs even small improvements can have a substantial positive impact on an organization's ROI for their cluster.
A more significant issue is the scheduling latency that many of the standard Linux services can introduce into long-running applications. It has been shown that these scheduling latencies can cut the performance of real-world applications by 50 percent in large cluster configurations, and they can be impractical to isolate because they are highly application-dependent.
Next, the Cluster Virtualization approach significantly enhances scalability as well, by employing a single primary daemon on the compute slaves; the Master node leverages this daemon to run jobs and to gather standard I/O, logs, statistics and so on from the slaves. Enhanced scalability is realized by the fact that compute nodes can be added effortlessly, on demand, and by optimizing common monitoring tools to leverage the architecture and collect cluster statistics directly from the Master node.
Finally, there is an important security side benefit to this architecture. The compute slaves of this type of cluster do not run most of the standard Linux daemons and do not have their own shell. The Master node has a special shell mechanism for sending commands out to the slaves, so this type of architecture is inherently more secure: the compute nodes cannot be logged into or attacked in any of the standard ways.
Making Virtualization Real
The whole point of this architecture is to make large pools of servers act and feel as if they were a single, consistent, virtual system. For example, Scyld ClusterWare employs a powerful technique, built upon the standard, out-of-the-box enterprise Linux distributions, to create "single system image" behavior with the Linux you already know. It does this by extending the Linux configuration on the Master node to have a single unified process space. From both the administrator and the user point of view, a 100-node cluster with 400 processors appears very much like a 400-processor SMP machine at the cost of commodity Linux x86 cluster computing.
The compute servers are fully transparent and directly accessible if need be, but the entire compute capacity is presented at the single Master node. Consider the example of the everyday task of issuing the ubiquitous "ps" process list command. What you get back is a listing of all processes running on all machines as if it were just one machine. You can still tell which processes are running where in the cluster if needed. Other standard Linux commands work in the same intuitive way as on a single machine.
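On a BProc-based system such as Scyld, this single-system behavior shows up directly at the command line. The session below is an illustrative sketch run on the Master node; the node number is hypothetical, and exact tool names and output vary by release:

```
bpstat            # show the compute nodes and their up/down status
ps aux            # one unified process listing covering every node
bpsh 3 uptime     # run a command on compute node 3 specifically
```

The unified `ps` output is what the single process space provides; `bpsh` covers the cases where a specific node must be addressed.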
You add users and set up passwords only on the Master. You submit jobs only on the Master and simply tell it how many processors you need (even for non-MPI jobs). You terminate jobs only on the Master, and they are automatically cleaned up on the compute nodes. Of course, you can run jobs or general commands on specific nodes if you need to. If you need the vital statistics of load, memory usage, disk usage and so on for any or all nodes, a single command line or GUI invocation on the Master node gets them.
With a virtualized cluster, you focus on the work throughput you need to achieve, not the fact that you have a cluster that requires massive scripting to iterate commands over each and every node.
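For contrast, here is a sketch of the per-node iteration a conventional cluster forces on administrators. The node names are hypothetical, and the real per-node action (shown commented out) would be an ssh invocation:

```shell
# Conventional clusters turn every administrative action into a loop,
# and every iteration can fail independently, leaving nodes inconsistent.
NODES="node01 node02 node03"   # ...in practice, hundreds of entries

for n in $NODES; do
    # Real use: ssh "$n" yum -y update && ssh "$n" reboot
    echo "would update $n"
done
```

Every update, health check and cleanup becomes another variant of this loop, each with its own partial-failure handling -- precisely the scripting burden the virtualized Master eliminates.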
The Cluster Virtualization Advantage
We began this article by noting the top challenges facing any enterprise delivering a guaranteed high performance computing service: finding the right solutions for deploying and managing these resources, scaling on demand to ever-changing workload requirements, and achieving the highest utilization levels matched to business priorities.
Built upon industry standard Linux distributions, Cluster Virtualization extends the operating system platform to deliver an elegantly simple and powerful new paradigm of clustered computing. This new paradigm eliminates the need for multiple levels of cost and support and delivers everything needed for users and administrators to be productive immediately, running HPC applications out of the box. It also dramatically increases efficiency and reduces operating costs while delivering a dependable HPC service to your organization, thereby maximizing the return on investment for Linux clustering in your highly competitive business environment.
Robert (Bob) Monkman has been in the computing industry for 20 years. He is currently the director of Software Product Management at Penguin Computing. Prior to joining Penguin, Robert held a variety of product marketing and management roles at MontaVista Software, Eternal Systems, OSE Systems, Wind River Systems and Microtec Research, specializing in operating systems, communication infrastructure, clustering and high availability. Robert has also held applications engineering and hardware/software development roles at Ready Systems and Tellabs. Robert holds a BSEE degree from the University of Illinois.