Entering the world of the Grid involves an exciting set of decisions. There are several dozen vendors with different products, several sets of standards from various bodies, internal staff training to consider, and an analysis of the current state of the business to take into account when determining which direction to go.
There are many issues that must be weighed when planning a Grid deployment. Among these are:
- Organizational security requirements.
- Data sensitivity.
- CPU peak requirements and duty cycles.
- Data storage.
- Internet bandwidth availability.
- Existing resources.
- Custom resources.
- Potential for porting.
- Potential for partnering.
Dealing with each of these in turn is well beyond the scope of this article, but it is worth remembering that the scope of Grid solutions touches many different areas of an enterprise. For this article, we will focus on CPU requirements and the balance that can be struck between TCO and improved performance with Grid-based applications.
Every manager with enterprise-level responsibility is familiar with the challenges of deploying a number of applications, each of which has different requirements. In traditional silo-based architectures, IT managers have been told by application providers to make certain they have enough hardware to run their applications under peak loads. The new ERP application needs a hundred nodes. The new database needs another 60. Do this a few times and pretty soon there are large, independent clusters sitting in a variety of silos throughout the organization.
Repeat this across multiple application domains and the end result is many machines running many applications, all of them over-provisioned. By the time a total inventory is conducted and usage levels are taken into account, it isn't uncommon to find that a given enterprise is over-provisioned by a factor of five or 10. This is not only an inefficient allocation of capital; it also creates scaling issues that become more cumbersome as the environment grows.
The capital costs of such an operation can be staggering. In a simple case, a company with $10 million in clusters supporting these various silo applications might be spending $8 million more than it needs to in order to achieve the expected level of performance. Using a Grid tool to implement a service-oriented infrastructure can help dramatically reduce these costs.
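A quick back-of-the-envelope sketch makes that arithmetic concrete. The dollar figure and the over-provisioning factor below are simply the illustrative numbers from the text, not measured data:

```python
# Back-of-the-envelope sketch of the over-provisioning cost described above.
# The figures are the article's illustrative numbers, not measurements.

cluster_spend = 10_000_000        # capital tied up in silo clusters ($)
over_provision_factor = 5         # conservative end of the "five or 10" range

needed_spend = cluster_spend / over_provision_factor
excess_spend = cluster_spend - needed_spend

print(f"Capacity actually needed: ${needed_spend:,.0f}")   # $2,000,000
print(f"Potential excess spend:   ${excess_spend:,.0f}")   # $8,000,000
```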
Figure 1
Figure 1 shows the simplest case of such an infrastructure. The organization has two applications, each of which requires a number of servers to operate, for a total of 26 servers. Each of these clusters is utilized at only a 20 percent duty cycle, so a tremendous amount of computing power sits idle most of the time. This is a great example of a situation where Grid technology can be used to take better advantage of resources. In business terms, it is an example of how to improve the TCO of the data center.
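To put that idle capacity in rough numbers, here is a small sketch using the example's own server count and duty cycle:

```python
# Rough utilization sketch for the silo layout in Figure 1.
# Server count and duty cycle are the article's example values.

total_servers = 26
duty_cycle = 0.20                 # each silo is busy only ~20% of the time

busy_equivalent = total_servers * duty_cycle
idle_equivalent = total_servers - busy_equivalent

print(f"Average busy capacity: {busy_equivalent:.1f} server-equivalents")  # ~5.2
print(f"Average idle capacity: {idle_equivalent:.1f} server-equivalents")  # ~20.8
```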
Figure 2
Reducing the number of servers required to run these two applications is one way to utilize Grid technology. In this example, by logically combining the two server farms into one and allowing services like Globus GRAM to submit jobs to the unified whole, the server count can be reduced from 26 to 18 with no loss of capability. Thanks to the differing needs of the applications, the number of servers necessary to meet the maximum load will still be available. Day-to-day operation will also be more resilient in the face of failure, thanks to the greater number of machines available in the pool.
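For readers who haven't seen GRAM in action, a job submission against the combined pool might look something like the following sketch, which wraps the pre-WS globusrun command-line client from Python. The gatekeeper contact string, job manager name and RSL are placeholders rather than a recommended configuration:

```python
# Minimal sketch of submitting a job to a GRAM gatekeeper fronting the
# combined pool. The contact string and RSL are placeholders; an actual
# deployment would point at its own gatekeeper and local job manager.
import subprocess

gatekeeper = "grid.example.com/jobmanager-pbs"    # hypothetical contact string
rsl = "&(executable=/bin/hostname)(count=1)"      # trivial one-node test job

# globusrun is the pre-WS GRAM client; -o streams the job's output back to us.
subprocess.run(["globusrun", "-o", "-r", gatekeeper, rsl], check=True)
```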
The above example represents a new way of thinking for many people joining the Grid community. Too many think of the Grid only as a cycle-scavenging tool that can be run on many workstations to create a virtual supercomputer. While this is clearly possible with a number of different products, it is only one part of the power of the Grid. As this case demonstrates, there are easy-to-quantify TCO reductions available in purpose-built facilities that harness the power of the Grid.
Following this line of thinking further, it is also possible to maximize efficiency in the other direction. Instead of removing servers and their associated expense from the operating budget, there are many scenarios in which it makes more sense to spend just as much money but gain greater productivity from those purchases.
Figure 3
Figure 3 shows one possible implementation of resources in this larger deployment. In this case, instead of buying fewer machines, the enterprise buys as many as ever but utilizes them through a Grid layer. This allows both user groups to take advantage of a much larger set of resources. In Figure 1, they had access to a maximum of 14 servers. This was enough for peak load, but left nothing extra for opportunistic work. In Figure 3, because each user group has access to up to 26 nodes, they can make requests for work that they otherwise couldn't. In some instances, this means running jobs that otherwise wouldn't get run at all. In others, it allows jobs to be executed with greater precision, thanks to the ability to harness more CPU power for the task.
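The opportunistic headroom is easy to quantify with the example's own counts; the numbers below come straight from the figures described above:

```python
# Opportunistic headroom sketch for Figure 3, using the article's counts.
silo_max_per_group = 14    # servers available to one group in the Figure 1 silos
shared_pool_size = 26      # servers visible to each group through the Grid layer

extra_burst_capacity = shared_pool_size - silo_max_per_group
print(f"Extra nodes available for opportunistic work: {extra_burst_capacity}")  # 12
```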
Another thing Figure 3 points out is that the Grid layer sits above the local schedulers that already exist. As a practical matter, this means that Grid technology can be adopted without unnecessary perturbation of existing systems. Additionally, there is no need to mandate a technology solution that spans the entire enterprise.
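To make that layering concrete, here is a deliberately simplified toy sketch of a Grid layer that forwards each job to whichever existing local scheduler reports free capacity. It is purely illustrative; the scheduler names and free-slot counts are hypothetical, and a real Grid service (GRAM plus an adapter per scheduler, for example) does far more than this. The point is that the local schedulers themselves are untouched:

```python
# Toy illustration of a Grid layer sitting above existing local schedulers.
# Scheduler names and free-slot counts are hypothetical example values.

local_schedulers = {
    "cluster-a (PBS)": 3,   # free slots reported by each local scheduler
    "cluster-b (LSF)": 0,
}

def submit(job_name: str) -> str:
    """Route a job to the local scheduler with the most free slots."""
    target, free = max(local_schedulers.items(), key=lambda kv: kv[1])
    if free == 0:
        return f"{job_name}: held at the Grid layer until capacity frees up"
    local_schedulers[target] -= 1
    return f"{job_name}: forwarded to {target}"

print(submit("erp-batch-001"))
print(submit("db-report-017"))
```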
There are a lot of different ways to think about the CPU challenges that exist when preparing for the Grid. These are just a couple. Grid toolkits like Globus allow for a number of different use cases, so explore and create. There is a lot of potential in this space.
About Rich Wellner
Rich Wellner is the Enterprise Architect for Univa Corp., which specializes in Globus solutions for large-scale challenges. He is the author, with Pawel Plaszczak, of the upcoming book “A Savvy Manager's Guide to Grid Computing.” He can be reached at [email protected].