Oracle Grid Director on Managing Large Deployments

By Nicole Hemsoth

October 9, 2006

In this Q&A, Oracle director of Grid computing, Bob Thome, discusses the complexity, management and security issues that arise when implementing Grid infrastructures, and why Grid is still worth the effort. Interestingly, however, Thome cites political and cultural issues as the No. 1 obstacle to Grid deployment.

GRIDtoday: To begin, can you give me some examples of large grid implementations, either public or private, on which Oracle has worked?

BOB THOME: Oracle infrastructure software is used in a variety of large grid implementations. Many customers have built large custom grids for use in research, and Oracle software can be found throughout their environments, often used as repositories to manage users and resources within the grid. For example, CERN has built a large grid to collect, distribute and analyze data captured by their Large Hadron Collider in Geneva. Oracle software is integral to managing these vast amounts of information.

However, most of our interest is not in the large grids built by research and academia, but rather grids built to run an enterprise’s business infrastructure. In that instance, the grids are not running grand challenge compute jobs, but are running databases, business applications, Web servers and application servers. 

Consider as an example Gas Natural, a leading natural gas operator and electricity provider operating across the globe. Since 2003, they have migrated a variety of mainframe applications to a grid based on Oracle infrastructure software. Their grid is built using standard, off-the-shelf components, for example, HP Linux servers. First, they migrated a saturated 2.2TB data warehouse to a clustered Oracle Database running in their grid. Their data warehouse runs on eight nodes within their grid. Their cost has been reduced by a factor of 10, while queries now run 52 times faster. Next, they extended their grid to host an SAP Business Information Warehouse. They migrated this system to seven Linux nodes in their grid. This resulted in a tremendous performance improvement — one query went from 83 minutes to 72 seconds. Presently, they are moving their custom electrical-market, Siebel and SAP transactional systems to the grid. By 2005, they had 66 Linux servers in their grid, and they expect that number to double.

Gt: What are some of the key factors — or obstacles — organizations should keep in mind when deploying large Grid infrastructures? What examples can you give from the aforementioned deployments?

THOME: There are a few key factors organizations should keep in mind when deploying a grid. Perhaps the biggest is political or cultural. Many organizations are accustomed to controlling their IT assets, and the concept of losing that direct control can be worrisome. 

While there are clear benefits such as access to additional resources for less cost, many business units will still resist the loss of control. They are concerned that the shared resources will not be available to them when they need them. In such cases, the successful organization will have a strong mandate from the top to move to this architecture. 

Also, while it’s possible to build a grid using existing resources from within the enterprise, most enterprises find this isn’t worth the trouble. Confiscating resources from individual departments aggravates the political issues and makes the transitions more difficult. Given these transitions take time to implement, and given the leaps that are made every year in hardware performance and efficiency, it’s often better to buy new (latest and greatest) servers for the grid.

Management of all the servers in a grid also requires some care. For example, Gas Natural found that once they implemented their grid, they had many more servers to manage and monitor. The old methods of system management were no longer effective. Fortunately, vendors such as Oracle have responded with products that are much more adept at managing and monitoring these grids. Oracle Enterprise Manager 10g, for example, allows an administrator to manage all their Oracle Applications, databases, application servers and the hosts that support them. The solution can manage servers as a group, performing a single action (such as patching) against multiple servers, databases and application servers.
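The idea of treating many servers as a single managed group can be illustrated with a minimal sketch. This is not Oracle Enterprise Manager's actual interface; the `group_action` helper, the host names and the patch identifier are all hypothetical, and the sketch simply shows one logical action fanning out across every member of a group rather than being repeated by hand:

```python
# Hypothetical sketch: one administrative action expanded across a
# managed group of servers. Host names and the patch id are invented
# for illustration; this is not the Oracle Enterprise Manager API.
def group_action(hosts, action):
    """Return the per-host commands that one logical action expands to."""
    return [f"ssh {host} '{action}'" for host in hosts]

grid = ["db01", "db02", "db03"]
for job in group_action(grid, "apply_patch ./patch_12345"):
    print(job)
```

In a real tool the fan-out would also handle scheduling, rolling application and failure reporting, but the core idea is the same: the administrator issues the action once, against the group.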

Gt: How difficult is it to manage complexity in these large environments? How should organizations plan for this concern?

THOME: As mentioned above, the old ways of managing servers individually do not scale. Administrators need tools to manage and monitor groups of servers. They need automation to eliminate the more mundane tasks.  Administrators also need to start managing services and service level objectives rather than individual components.  Oracle Enterprise Manager 10g, for example, can monitor metrics on the various components and services within the grid and notify the administrator should an exception occur. 
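The shift from watching individual components to managing service-level objectives can be sketched as follows. This is a generic illustration, not Oracle Enterprise Manager's implementation; the metric names and thresholds are invented. The point is that the administrator defines objectives once and is notified only on exceptions:

```python
# Hypothetical sketch of service-level monitoring: compare each service's
# metrics against its objectives and surface only the exceptions, instead
# of an administrator watching every server individually.
def check_slos(metrics, objectives):
    """Return (service, metric, value) triples that violate an objective."""
    exceptions = []
    for service, values in metrics.items():
        for name, value in values.items():
            limit = objectives.get(name)
            if limit is not None and value > limit:
                exceptions.append((service, name, value))
    return exceptions

objectives = {"response_ms": 500, "cpu_pct": 90}
metrics = {
    "orders_db": {"response_ms": 620, "cpu_pct": 45},
    "web_tier": {"response_ms": 180, "cpu_pct": 95},
}
for service, metric, value in check_slos(metrics, objectives):
    print(f"ALERT: {service} {metric}={value}")
```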

Gt: What about security? How do issues surrounding security change or grow as grids get bigger?

THOME: Grid environments do impose new requirements on security. These requirements are not necessarily related to size, but more to the security policies of the users involved. 

For example, enterprises like Gas Natural keep their servers in their datacenter and behind a firewall. The organizations sharing the resources in the grid are all part of the same enterprise and have some assumed level of trust. This dramatically simplifies the security problem relative to a grid that would span multiple organizations. 

On the other hand, there is a lot of interest in Grid from application service providers (ASP). Many of these providers have greater security concerns and need firewalls between the various components in their grid. They use network switches to build up virtual LANs to electrically isolate servers used by one “customer” from others. They use fibre channel switches to create zones for SAN storage, ensuring files are only available to authorized users. As resources are re-provisioned, care is taken to ensure they are scrubbed clean — no confidential data or malicious code is left behind for the next user.
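The scrub-on-reprovision step can be reduced to a simple invariant, sketched below. The data model is invented for illustration (real provisioning systems wipe disks, reset firmware and reimage the OS); the sketch only captures the rule that nothing from the previous tenant survives the handover:

```python
# Hypothetical sketch of scrub-on-reprovision: before a pooled server is
# handed to its next tenant, all state left by the previous tenant is
# removed, so no confidential data or malicious code carries over.
def reprovision(server, new_tenant):
    """Scrub a server's per-tenant state and assign it to a new tenant."""
    server["disk"] = []        # wipe any confidential data
    server["software"] = []    # remove anything the last tenant installed
    server["tenant"] = new_tenant
    return server
```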

Gt: Are there security concerns specific to particular types of grids (e.g., desktop grids, datacenter grids, international grids, etc.)?

THOME: Desktop grids are inherently insecure and are only used for applications where the data is neither confidential nor irreplaceable. What drives security is not whether the grid spans one data center or six international data centers. What drives security is whether the users in the grid — be they in a single data center or multiple data centers — trust each other.

Gt: How do security concerns vary between commercial and research organizations?

THOME: You may think commercial organizations would have more security concerns, but at this stage in grid deployments, most commercial grids are safely within the enterprise. It’s the research and academic users who are trying to build grids that span many users from many organizations — with no single span of control. Many of these grids have therefore had to develop more sophisticated security solutions to not only protect data, but also restrict usage of resources within the grid.

Gt: What about among the various vertical markets within the commercial sector?

THOME: I don't see large variations in security concerns among various vertical markets. However, if you consider the ASP or hosting market as a vertical, you introduce the complexities of multiple mutually untrusting users sharing resources in a grid.

Gt: Speaking specifically about Oracle's database business, how is database management affected in large Grid environments? What has the company done with 10g in order to maximize simplicity along this front?

THOME: We did a lot in Oracle Database 10g to facilitate management of grids. To begin, we introduced a great deal of self-management features directly into Oracle Database 10g. The easiest thing to manage is the thing that manages itself. 

We also introduced Oracle Enterprise Manager 10g, which can manage many databases, application servers and Oracle Applications, and their underlying hosts, as a group. For example, run Oracle Enterprise Manager 10g's patch wizard once and it will offer to schedule patching on all databases. Oracle Enterprise Manager 10g has features to facilitate change and configuration management. It can compare configurations and clone configurations to simplify provisioning. And, it has a great deal of management and monitoring features that allow database and system administrators to easily monitor service metrics and receive notifications of service level exceptions.

Gt: Given all the complexity and, to a lesser degree, security issues inherent in Grid implementations, why would an organization want to deploy a grid? What benefits come from these deployments, and how do they outweigh any concerns or obstacles?

THOME: There are three key benefits to Grid computing. First, you can get better information faster. You can bring resources to bear on your business problems as needed. If you have a fixed amount of time, say 24 hours to perform an analysis, you can bring in additional resources to perform better or deeper analysis. And if you have a fixed amount of work, say a report to run, you can bring in additional resources and run it faster. 

Second, you can better align your resources with your business requirements. Enterprises have different business priorities by time of day, day of week, time of month, quarter and year. Also, priorities will shift over time. Sharing resources in a grid makes it easy to move resources from one workload to another, thereby aligning resources and business requirements. 

Lastly, you can save money. You can increase the utilization of your resources by sharing failover and peak capacity across applications, and you can use less expensive components — you can pool multiple smaller, inexpensive servers in place of a larger, more powerful server.

Gt: Is there anything else you'd like to add?

THOME: Flexibility is just as important as ROI and TCO, though its benefits are harder to quantify. It gives customers the ability to cope with increasingly unpredictable workloads, which lets their business adapt quickly to change and avoid the problems that arise when it cannot.

Grids provide many benefits today. You get better information faster because you can bring additional resources to bear on a problem to perform better analysis in a shorter period of time. You can better and more quickly align your resources with your business priorities, and save money by increasing utilization (i.e., sharing peak and failover capacity across applications) and by using smaller, less expensive servers (that are then virtualized by the Grid layer to behave as a larger, more expensive server). Although Grid technologies are still under development, these benefits can be realized today.

One last thing about Grid is that it's easily adopted in an incremental manner. Customers can start small, and then grow their grid as they become more comfortable. You don't have to move everything to the grid all at once — grids and traditional architectures can coexist.
