Oracle Grid Director on Managing Large Deployments

By Nicole Hemsoth

October 9, 2006

In this Q&A, Oracle director of Grid computing, Bob Thome, discusses the complexity, management and security issues that arise when implementing Grid infrastructures, and why Grid is still worth the effort. Interestingly, however, Thome cites political and cultural issues as the No. 1 obstacle to Grid deployment.

GRIDtoday: To begin, can you give me some examples of large grid implementations, either public or private, on which Oracle has worked?

BOB THOME: Oracle infrastructure software is used in a variety of large grid implementations. Many customers have built large custom grids for use in research, and Oracle software can be found throughout their environments, often used as repositories to manage users and resources within the grid. For example, CERN has built a large grid to collect, distribute and analyze data captured by their Large Hadron Collider in Geneva. Oracle software is integral to managing these vast amounts of information.

However, most of our interest is not in the large grids built by research and academia, but rather grids built to run an enterprise’s business infrastructure. In that instance, the grids are not running grand challenge compute jobs, but are running databases, business applications, Web servers and application servers. 

Consider, as an example, Gas Natural, a leading natural gas operator and electricity provider operating across the globe. Since 2003, they have migrated a variety of mainframe applications to a grid based on Oracle infrastructure software. Their grid is built using standard, off-the-shelf components, for example, HP Linux servers. First, they migrated a saturated 2.2TB data warehouse to a clustered Oracle Database running in their grid. Their data warehouse runs on eight nodes within their grid. Their cost has been reduced by a factor of 10, while queries now run 52 times faster. Next, they extended their grid to host an SAP Business Information Warehouse. They migrated this system to seven Linux nodes in their grid. This resulted in a tremendous performance improvement — one query went from 83 minutes to 72 seconds. Presently, they are moving their custom electrical market, Siebel and SAP transactional systems to the grid. By 2005, they had 66 Linux servers in their grid, and they expect that number to double.

Gt: What are some of the key factors — or obstacles — organizations should keep in mind when deploying large Grid infrastructures? What examples can you give from the aforementioned deployments?

THOME: There are a few key factors organizations should keep in mind when deploying a grid. Perhaps the biggest is political or cultural. Many organizations are accustomed to controlling their IT assets, and the concept of losing that direct control can be worrisome. 

While there are clear benefits such as access to additional resources for less cost, many business units will still resist the loss of control. They are concerned that the shared resources will not be available to them when they need them. In such cases, the successful organization will have a strong mandate from the top to move to this architecture. 

Also, while it’s possible to build a grid using existing resources from within the enterprise, most enterprises find this isn’t worth the trouble. Confiscating resources from individual departments aggravates the political issues and makes the transitions more difficult. Given these transitions take time to implement, and given the leaps that are made every year in hardware performance and efficiency, it’s often better to buy new (latest and greatest) servers for the grid.

Management of all the servers in a grid also requires some care. For example, Gas Natural found that once they implemented their grid, they had many more servers to manage and monitor. The old methods of system management were no longer effective. Fortunately, vendors such as Oracle have responded with products that are much more adept at managing and monitoring these grids. Oracle Enterprise Manager 10g, for example, allows an administrator to manage all their Oracle Applications, databases, application servers and the hosts that support them. The solution can manage servers as a group, performing a single action (such as patching) against multiple servers, databases and application servers.
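The group-operation idea described here (one action fanned out across many servers) can be sketched in a few lines. This is a hedged illustration, not Oracle Enterprise Manager code: the server list and the `apply_patch` stub are hypothetical stand-ins for a real inventory repository and patch agent.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical inventory; in a real grid this would come from the
# management repository rather than a hard-coded list.
SERVER_GROUP = ["db-node-1", "db-node-2", "db-node-3", "app-node-1"]

def apply_patch(host: str) -> str:
    """Stand-in for a real patch action (e.g. an ssh or agent call)."""
    return f"{host}: patch applied"

def run_group_action(hosts, action):
    """Run one action against every server in the group in parallel."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(action, hosts))

results = run_group_action(SERVER_GROUP, apply_patch)
for line in results:
    print(line)
```

The point of the pattern is that the administrator issues one command against a named group, and the tooling handles the fan-out, so adding the hundredth server costs no more administrative effort than the tenth.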

Gt: How difficult is it to manage complexity in these large environments? How should organizations plan for this concern?

THOME: As mentioned above, the old ways of managing servers individually do not scale. Administrators need tools to manage and monitor groups of servers. They need automation to eliminate the more mundane tasks.  Administrators also need to start managing services and service level objectives rather than individual components.  Oracle Enterprise Manager 10g, for example, can monitor metrics on the various components and services within the grid and notify the administrator should an exception occur. 
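The shift Thome describes, from watching individual components to watching service-level objectives, can be sketched as a simple threshold check with exception-only notification. The metric names and limits below are invented for illustration; a real deployment would pull both from a monitoring repository.

```python
# Service-level objectives: metric name -> maximum acceptable value.
# These names and limits are hypothetical examples.
SERVICE_OBJECTIVES = {
    "order_entry_response_ms": 500,
    "warehouse_query_time_s": 120,
}

def check_slos(current_metrics, objectives):
    """Return (metric, value, limit) for every objective that is exceeded."""
    violations = []
    for metric, limit in objectives.items():
        value = current_metrics.get(metric)
        if value is not None and value > limit:
            violations.append((metric, value, limit))
    return violations

# One sampled set of metrics; only the violated objective triggers an alert.
sampled = {"order_entry_response_ms": 740, "warehouse_query_time_s": 95}
for metric, value, limit in check_slos(sampled, SERVICE_OBJECTIVES):
    print(f"ALERT {metric}: {value} exceeds objective {limit}")
```

Because the administrator is notified only on exceptions, the same person can oversee far more servers than under component-by-component monitoring.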

Gt: What about security? How do issues surrounding security change or grow as grids get bigger?

THOME: Grid environments do impose new requirements on security. These requirements are not necessarily related to size, but more to the security policies of the users involved. 

For example, enterprises like Gas Natural keep their servers in their datacenter and behind a firewall. The organizations sharing the resources in the grid are all part of the same enterprise and have some assumed level of trust. This dramatically simplifies the security problem relative to a grid that would span multiple organizations. 

On the other hand, there is a lot of interest in Grid from application service providers (ASPs). Many of these providers have greater security concerns and need firewalls between the various components in their grid. They use network switches to build up virtual LANs to electrically isolate servers used by one “customer” from others. They use fibre channel switches to create zones for SAN storage, ensuring files are only available to authorized users. As resources are re-provisioned, care is taken to ensure they are scrubbed clean — no confidential data or malicious code is left behind for the next user.
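The scrub-on-reprovisioning step can be illustrated with a minimal sketch: wipe the outgoing tenant's workspace and verify nothing remains before the resource is handed to the next user. The per-tenant workspace directory is an assumption made for illustration; real provisioning systems scrub disks, LUNs and OS images, not just a directory.

```python
import os
import shutil
import tempfile

def scrub_workspace(path: str) -> bool:
    """Remove all tenant data under `path`; return True if nothing remains."""
    for entry in os.listdir(path):
        full = os.path.join(path, entry)
        if os.path.isdir(full):
            shutil.rmtree(full)
        else:
            os.remove(full)
    return not os.listdir(path)

# Simulate a workspace left behind by the previous tenant.
workspace = tempfile.mkdtemp()
with open(os.path.join(workspace, "confidential.dat"), "w") as f:
    f.write("secret")

print("clean:", scrub_workspace(workspace))
```

The verification step matters as much as the deletion: the resource is released to the next tenant only after the check confirms the scrub succeeded.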

Gt: Are there security concerns specific to particular types of grids (e.g., desktop grids, datacenter grids, international grids, etc.)?

THOME: Desktop grids are inherently not secure and only used for applications where the data is neither confidential nor irreplaceable. What drives security is not whether the grid spans one data center, or six international data centers. What drives security is whether the users in the grid — be they in a single data center or multiple data centers — trust each other.

Gt: How do security concerns vary between commercial and research organizations?

THOME: You may think commercial organizations would have more security concerns, but at this stage in grid deployments, most commercial grids are safely within the enterprise. It’s the research and academic users who are trying to build grids that span many users from many organizations — with no single span of control. Many of these grids have therefore had to develop more sophisticated security solutions to not only protect data, but also restrict usage of resources within the grid.

Gt: What about among the various vertical markets within the commercial sector?

THOME: I don't see large variations in security concerns among various vertical markets. However, if you consider the ASP or hosting market as a vertical, you introduce the complexities of multiple mutually untrusting users sharing resources in a grid.

Gt: Speaking specifically about Oracle's database business, how is database management affected in large Grid environments? What has the company done with 10g in order to maximize simplicity along this front?

THOME: We did a lot in Oracle Database 10g to facilitate management of grids. To begin, we introduced a great deal of self-management features directly into Oracle Database 10g. The easiest thing to manage is the thing that manages itself. 

We also introduced Oracle Enterprise Manager 10g, which can manage many databases, application servers and Oracle Applications and their underlying hosts as a group. For example, run Oracle Enterprise Manager 10g's patch wizard once and it will offer to schedule patching on all databases. Oracle Enterprise Manager 10g has features to facilitate change and configuration management. It can compare configurations and clone configurations to simplify provisioning. And, it has a great deal of management and monitoring features that allow database and system administrators to easily monitor service metrics and receive notifications of service level exceptions.

Gt: Given all the complexity and, to a lesser degree, security issues inherent in Grid implementations, why would an organization want to deploy a grid? What benefits come from these deployments, and how do they outweigh any concerns or obstacles?

THOME: There are three key benefits to Grid computing. First, you can get better information faster. You can bring resources to bear on your business problems as needed. If you have a fixed amount of time, say 24 hours to perform an analysis, you can bring in additional resources to perform better or deeper analysis. And if you have a fixed amount of work, say a report to run, you can bring in additional resources and run it faster. 

Second, you can better align your resources with your business requirements. Enterprises have different business priorities by time of day, day of week, time of month, quarter and year. Also, priorities will shift over time. Sharing resources in a grid makes it easy to move resources from one workload to another, thereby aligning resources and business requirements. 

Lastly, you can save money. You can increase the utilization of your resources by sharing failover and peak capacity across applications, and you can use less expensive components — you can pool multiple smaller inexpensive servers in place of a larger, more powerful, server.

Gt: Is there anything else you'd like to add?

THOME: Flexibility is just as important as ROI and TCO, though its benefits are a bit harder to quantify. It gives customers the ability to cope with increasingly unpredictable workloads, enabling their business to adapt quickly to change and avoiding the problems that arise when the business cannot adapt.

Grids provide many benefits today. You get better information faster because you can bring additional resources to bear on a problem to perform better analysis in a shorter period of time. You can better and more quickly align your resources with your business priorities, and save money by increasing utilization (i.e., share peak and failover capacity across applications) and by using smaller less expensive servers (that are then virtualized by the Grid layer to behave as a larger more expensive server). Although Grid technologies are under development, these benefits can be realized today.

One last thing about Grid is that it's easily adopted in an incremental manner. Customers can start small, and then grow their grid as they become more comfortable. You don't have to move everything to the grid all at once — grids and traditional architectures can coexist.
