Customer-facing and transaction-oriented applications are often built using Web-based and/or service-oriented architectures, running in open source environments such as JBoss or on leading commercial application servers including BEA WebLogic or Oracle Application Server. These applications experience volatile demand for system and data resources, with processing requirements that vary significantly throughout the day.
The demand for improved response time creates an insatiable appetite for more compute power. IT organizations have traditionally answered this call by adding system resources to support increased transaction volumes and to meet associated service level requirements. This approach has had a number of negative effects, such as:
- Excess Capacity: Adding servers, storage and bandwidth each time a new application is installed contributes to an already underutilized pool of computing resources.
- Increased Costs: Building out a siloed, dedicated infrastructure can be expensive to deploy and cumbersome to maintain.
- Limited Growth: Resources are dedicated to specific applications, limiting the ability to take advantage of excess capacity by horizontally scaling the infrastructure or dynamically allocating resources.
This approach limits the business's ability to respond to market needs — especially when it can take weeks or months to roll out hardware and software in support of new applications. Many organizations recognize the need for software that can help them utilize existing capacity and improve the performance and reliability of their J2EE applications.
Extending from Compute-Intensive to Transactional Applications
A recent article in NetworkWorld acknowledges the evolution of Grid beyond its roots in high-performance computing (HPC), or “compute-intensive,” applications.
“As Grid computing enters more enterprise environments, the buzz over the technology's potential never ceases. Once Grids are installed, network executives find them useful for a far wider variety of applications than just computationally heavy ones. They also work well for applications that have high transactional volumes or are data intensive. And after sending those apps to the Grid, it dawns on these early adopters that what they have is a giant, powerful — and comparatively inexpensive — next-generation generic application server.” (NetworkWorld, Sept. 26, 2005)
When applied to applications built in J2SE, J2EE and .NET, Grid middleware can enable transparent scaling across the computing infrastructure.
The goal is to create an application provisioning and runtime environment in which applications are virtualized — that is, decoupled from the underlying hardware infrastructure. Instead of being configured and provisioned on specific computers, applications are configured to run on an application service fabric without identifying the exact set of computers they will run on. This creates a highly adaptive environment for running applications on a shared set of computers, or Grid. In this virtualized environment, automated provisioning decisions are based on usage policies created for each application. These usage policies describe measurable attributes of the application, such as response time or throughput, along with rules such as minimum and maximum resource utilization.
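To make the idea of a usage policy concrete, the following is a minimal sketch of what such a policy might capture in code. The class, field and method names are illustrative assumptions for this article, not the actual DataSynapse API: a policy pairs a service-level target (here, response time) with minimum and maximum resource bounds.

```java
// Hypothetical usage-policy sketch; names are illustrative, not a vendor API.
public class UsagePolicy {
    final String applicationName;
    final double maxResponseTimeMillis; // measurable service-level target
    final int minInstances;             // floor of resources kept allocated
    final int maxInstances;             // ceiling on the shared Grid

    public UsagePolicy(String applicationName, double maxResponseTimeMillis,
                       int minInstances, int maxInstances) {
        this.applicationName = applicationName;
        this.maxResponseTimeMillis = maxResponseTimeMillis;
        this.minInstances = minInstances;
        this.maxInstances = maxInstances;
    }

    // The policy is breached when observed response time exceeds the target.
    public boolean isBreached(double observedResponseTimeMillis) {
        return observedResponseTimeMillis > maxResponseTimeMillis;
    }

    public static void main(String[] args) {
        UsagePolicy policy = new UsagePolicy("order-entry", 250.0, 2, 16);
        System.out.println(policy.applicationName + " breached at 400 ms? "
                + policy.isBreached(400.0)); // prints "order-entry breached at 400 ms? true"
    }
}
```

In practice such policies would be declared in configuration rather than code, but the shape is the same: a named application, a measurable target, and resource bounds for the provisioner to respect.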
In this scenario, applications that need more compute power to accommodate volatile workloads are provisioned automatically, based on business policy, at runtime. Traditionally, J2EE applications were limited to manual provisioning across fixed, siloed clusters. When distributed over a Grid, additional available system resources can be provisioned automatically to satisfy demand. This allows IT departments to scale with ease, aligning IT with business goals and dramatically improving service levels.
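The runtime provisioning decision described above can be sketched as a simple rule. This is an assumed illustration of the general pattern, not the vendor's actual algorithm: scale out when observed latency exceeds the policy target, scale in when latency is well under it, and always stay within the policy's minimum and maximum instance bounds.

```java
// Illustrative provisioning rule (an assumption for this sketch, not the
// vendor's algorithm): add capacity on breach, release it when idle,
// clamped between the policy's min and max instance counts.
public class Provisioner {
    static int decideInstances(int current, double observedMs,
                               double targetMs, int min, int max) {
        int desired = current;
        if (observedMs > targetMs) {
            desired = current + 1;        // breach: provision another instance
        } else if (observedMs < 0.5 * targetMs) {
            desired = current - 1;        // comfortably idle: release capacity
        }
        return Math.max(min, Math.min(max, desired)); // respect policy bounds
    }

    public static void main(String[] args) {
        // 400 ms observed against a 250 ms target: scale from 4 to 5 instances.
        System.out.println(decideInstances(4, 400.0, 250.0, 2, 16)); // prints 5
    }
}
```

A real Grid scheduler would weigh priorities across many applications competing for the same pool, but each per-application decision reduces to this kind of policy-bounded adjustment.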
Benefits of Grid-Enabling Transactional, J2EE Applications
- Policy-based Provisioning and Activation: System resources are allocated at runtime, based on priority, to ensure that service levels are aligned with business goals. Provisioning and scheduling are driven by service level policies that can be defined per application or per group of resources. The result is a more agile and responsive IT infrastructure.
- Seamless Virtualization of Java Services: This includes the ability to virtualize applications built using “plain old Java objects” (POJOs) and Enterprise JavaBeans. Supporting both programming models provides the flexibility companies need to seamlessly and transparently run their applications in an adaptive Grid infrastructure.
- Service Level Management: By monitoring and managing key metrics such as throughput, latency, resource usage and exceptions, the Grid software can take corrective action, dynamically provisioning resources to address service levels that are at risk of breach. The result is assured compliance and optimized service level management.
- Rapid Application Deployment: The goal is to deploy cross-language, versioned applications and environment settings, with a facility for rolling out new applications or components to distributed resources — while the Grid is still actively running other applications. This allows for many applications and different versions of the same application to run in the same distributed environment.
- Unlimited Application Scalability: Additional processors can simply be added to the compute pool, without limit, and made available for work automatically, on demand. As additional computing power is needed, the Grid software dynamically provisions resources on the new servers, enabling applications to be effortlessly and transparently scaled.
- Massive Task Throughput Rates: One of the most important requirements in transaction processing is the ability to handle high volumes of requests per second. Ideally, hundreds of thousands of tasks per second can flow through the adaptive Grid infrastructure.
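The high-throughput task dispatch described in the last benefit above can be illustrated in miniature with a local thread pool; on a Grid, a broker distributes the same kind of small, independent tasks across machines instead of threads. The class name and task payload below are assumptions made up for this sketch.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Minimal fan-out sketch: many small independent tasks submitted to a pool.
// A Grid broker plays the analogous role across a pool of machines.
public class ThroughputDemo {
    public static int runTasks(int taskCount) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        try {
            // Submit every task without waiting; collect handles to results.
            List<Future<Integer>> futures = IntStream.range(0, taskCount)
                    .mapToObj(i -> pool.submit(() -> i * i))
                    .collect(Collectors.toList());
            int completed = 0;
            for (Future<Integer> f : futures) {
                f.get();            // block until each task finishes
                completed++;
            }
            return completed;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTasks(10_000) + " tasks completed");
    }
}
```

The design point is that throughput comes from keeping tasks small and independent, so the scheduler (thread pool or Grid) can saturate whatever capacity the policy has provisioned.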
Summary
New technology is available to bring the benefits of Grid computing — radically improved service levels at significantly lower cost — to J2EE applications. When applied in these transactional environments, it is possible to achieve horizontal scale that optimizes resource utilization and reduces the need for more hardware. Dynamic service provisioning allocates resources based on business policy to assure maximum productivity. Ultimately, this is a logical evolution that can make the vision of “on-demand computing” an enterprise reality — by deploying Grid resources as needed and where needed to meet demand.
About Kelly Vizzini
As chief marketing officer at DataSynapse, Kelly Vizzini works to leverage the company's existing successes and domain expertise to build a brand identity that positions DataSynapse as the de facto standard in the U.S. and European markets for distributed computing solutions. Prior to her role at DataSynapse, Vizzini held marketing positions at several software companies including Prescient, Optum, Metasys and InfoSystems. She holds a bachelor's degree in journalism and communications from the University of South Carolina.