GRIDtoday spoke with Appistry CEO Kevin Haar about the complementary nature of Grid computing and virtualization, and how these two technologies — along with some others — play a key role in creating an application fabric.
GRIDtoday: Appistry is a company that offers the capabilities of both Grid computing and virtualization. Can you define each of those terms from your point of view?
KEVIN HAAR: Originally, Grid concepts conjured up images of “cycle-scavenging” across hundreds or thousands of desktop processors for research applications. Today, Grid has evolved dramatically, and enterprise customers are primarily interested in deploying their large-scale, time-critical applications across a pool of commodity-grade computers, inside and across data centers.
That's where virtualization comes in. For enterprises to remain agile, operating these resource pools, and developing for them, must become dramatically simpler. In other words, developers and operations staff want to see the grid as one thing, not as a bunch of individual components. We call this “scale-out” virtualization, in contrast to the “server” virtualization popularized by VMware and others, and believe it is necessary to dramatically simplify the task of deploying, managing and developing for very large numbers of processors.
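To make the idea concrete, here is a minimal, hypothetical sketch (not Appistry's actual API) of what “the grid as one thing” means to a developer: callers submit work to a single fabric object and never address the individual machines behind it. Worker threads stand in for commodity nodes.

```python
# Hypothetical sketch of "scale-out" virtualization: one submit()
# surface in front of a pool of worker "nodes".
from concurrent.futures import ThreadPoolExecutor

class Fabric:
    """Presents a pool of worker 'nodes' as a single compute surface."""
    def __init__(self, node_count):
        # Each worker thread stands in for a commodity machine.
        self._pool = ThreadPoolExecutor(max_workers=node_count)

    def submit(self, task, *args):
        # Callers address the fabric; placement is the fabric's problem.
        return self._pool.submit(task, *args)

fabric = Fabric(node_count=8)
futures = [fabric.submit(pow, n, 2) for n in range(10)]
print([f.result() for f in futures])  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The point of the sketch is the interface, not the threading: the application sees one thing, however many processors sit behind it.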
Gt: Individually, what is it about each technology that makes it so beneficial to enterprises? What kinds of additional benefits and capabilities can be seen when both are present in one solution? What makes Grid computing and virtualization such complementary technologies?
HAAR: Individually, Grid reduces capital costs, while “scale-out” virtualization slashes operational costs by dramatically simplifying the development, deployment and management of large-scale, time-critical applications. What's more important, though, are the strategic advantages to be had when these technologies are brought together to enable a new set of capabilities.
Fundamentally, the amount of data available to today's enterprise is staggering. And enterprise competitiveness depends on the ability to turn that data into meaningful decisions and insights, which in turn allow companies to create better products, identify new opportunities and better serve customers.
Together, Grid and virtualization can solve a major piece of this problem by making processing power less expensive both from an acquisition and ongoing management perspective. This will enable the ubiquitous deployment of real-time business analytics and enterprise high performance computing.
Gt: So, is a fabric simply a combination of these two technologies, or are there other capabilities that need to be present for a solution to truly be called a fabric?
HAAR: We believe a true “application fabric” combines these two ideas with two other important technologies.
The first of these, what we call “application-level fault tolerance,” allows applications running in the fabric to be extremely tolerant of infrastructure failure. The reality is, when you have tens, hundreds or thousands of commodity components, things break! Application-level fault tolerance ensures that the application can survive hardware, OS or network failures.
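The mechanism behind that claim can be sketched in a few lines. This is a hypothetical illustration, not Appistry's implementation: when a node dies mid-task, the fabric transparently reruns the task on another node, so the application only ever sees a result or a total exhaustion of the pool.

```python
# Hypothetical sketch of application-level fault tolerance:
# retry a failed task on other nodes instead of surfacing the failure.
import random

class NodeFailure(Exception):
    pass

def run_on_node(task, arg, node):
    # Simulate an unreliable commodity node: ~30% chance it dies mid-task.
    if random.random() < 0.3:
        raise NodeFailure(f"node {node} died")
    return task(arg)

def fault_tolerant_run(task, arg, nodes):
    # The fabric reschedules; the application never sees the failure.
    for node in nodes:
        try:
            return run_on_node(task, arg, node)
        except NodeFailure:
            continue  # try the next healthy node
    raise RuntimeError("all nodes failed")

random.seed(1)  # deterministic demo
result = fault_tolerant_run(lambda x: x * x, 7, nodes=range(5))
print(result)  # 49, even though a node failed along the way
```

A production fabric would also have to handle tasks with side effects and partial completion; the sketch only shows the retry-elsewhere idea.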
The other technology, or rather set of technologies, provides for the automated management of a fabric and its applications. Things like “assimilation” (the automated provisioning of new nodes) and rolling updates, which allow OS and/or application updates to be deployed to a running fabric, are key elements in making large systems easy to manage.
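The rolling-update idea reduces to a simple invariant: take nodes out of service one at a time, so the fabric as a whole keeps serving throughout the upgrade. A minimal, hypothetical sketch under that assumption:

```python
# Hypothetical sketch of a rolling update: upgrade one node at a time
# so at most one node is ever out of service.
def rolling_update(nodes, new_version):
    for node in nodes:
        node["serving"] = False        # drain only this node
        node["version"] = new_version  # apply the OS/app update
        node["serving"] = True         # rejoin the running fabric

nodes = [{"id": i, "version": "1.0", "serving": True} for i in range(4)]
rolling_update(nodes, "2.0")
print(all(n["version"] == "2.0" and n["serving"] for n in nodes))  # True
```

Real systems add health checks and rollback between steps, but the one-node-at-a-time loop is the core of keeping a running fabric available during updates.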
Individual vendors differentiate themselves within this framework both in how they provide the base technologies and in the additional features they layer on top. Our product, Appistry Enterprise Application Fabric, for example, is unique in its cross-platform support for .NET, Java and C/C++ applications on Windows or Linux, as well as in additional features we provide such as integrated data caching.
Gt: From where does the term “fabric” come? Is the metaphor based on how — like the threads in a real fabric — the individual machines appear and act as one?
HAAR: That's an important part of it: that a large number of machines look and act like one … a unified fabric of processing power, so to speak. It's also a metaphor for the business and technical agility, the flexibility, that the technology unlocks for customers. In those ways, an application fabric is something of an antithesis to “big iron.”
Gt: Where do you see fabric computing fitting in the overall Grid landscape? Do you see it becoming increasingly popular as companies look to adopt Grid-type technologies that best fit their needs?
HAAR: We believe that application fabrics enable the next generation of Grid computing for the enterprise. Today, application fabrics enhance grid scalability with virtualization, fail-over and operational automation, making them a great fit for traditional “Grid apps” like image processing, financial simulations, route optimization, etc. However, application fabrics bring the same powerful characteristics to real-time analytical applications and high-volume transactional applications, creating “real-time grids.” Real-time grids, enabled by application fabrics, will become a powerful tool for the enterprise: a highly agile, scalable and reliable service execution environment built on commodity infrastructure, with tremendous economic benefits.
Gt: Finally, while the idea of fabrics — and even grids, for that matter — is still trying to make a name for itself, virtualization seems to have caught on pretty well already. How ubiquitous do you see virtualization becoming, and how will this ubiquity affect the adoption rates of higher level technologies like fabrics and grids?
HAAR: Virtualization in the data center has a bright future. Today, data center deployment of server virtualization is primarily focused on “consolidating” departmental apps onto mid- or high-end servers. It makes sense to us that this use has caught on first, because there are more of these applications and the complexity is low. But the various aspects of virtualization — server virtualization, storage virtualization, scale-out virtualization — are all expressions of the same desire to abstract physical infrastructure into resource pools across which applications and data can be deployed. So, we see server virtualization deployment as a leading indicator for grid and fabric deployment. As enterprises get more comfortable with the technology, they'll seek to gain the same agility and low cost for their mission-critical, and more costly, applications that they are starting to see for their departmental apps.