Oracle Grid Director on Managing Large Deployments

By Nicole Hemsoth

October 9, 2006

In this Q&A, Bob Thome, Oracle's director of Grid computing, discusses the complexity, management and security issues that arise when implementing Grid infrastructures, and why Grid is still worth the effort. Interestingly, however, Thome cites political and cultural issues as the No. 1 obstacle to Grid deployment.

GRIDtoday: To begin, can you give me some examples of large grid implementations, either public or private, on which Oracle has worked?

BOB THOME: Oracle infrastructure software is used in a variety of large grid implementations. Many customers have built large custom grids for use in research, and Oracle software can be found throughout their environments, often used as repositories to manage users and resources within the grid. For example, CERN has built a large grid to collect, distribute and analyze data captured by their Large Hadron Collider in Geneva. Oracle software is integral to managing these vast amounts of information.

However, most of our interest is not in the large grids built by research and academia, but rather grids built to run an enterprise’s business infrastructure. In that instance, the grids are not running grand challenge compute jobs, but are running databases, business applications, Web servers and application servers. 

Consider, as an example, Gas Natural, a leading natural gas operator and electricity provider with operations across the globe. Since 2003, they have migrated a variety of mainframe applications to a grid based on Oracle infrastructure software. Their grid is built from standard, off-the-shelf components, for example, HP Linux servers. First, they migrated a saturated 2.2TB data warehouse to a clustered Oracle Database running on eight nodes within their grid. Their cost has been reduced by a factor of 10, while queries now run 52 times faster. Next, they extended their grid to host an SAP Business Information Warehouse, migrating that system to seven Linux nodes. This resulted in a tremendous performance improvement: one query went from 83 minutes to 72 seconds. Presently, they are moving their custom electrical market, Siebel and SAP transactional systems to the grid. By 2005, they had 66 Linux servers in their grid, and they expect that number to double.

Gt: What are some of the key factors — or obstacles — organizations should keep in mind when deploying large Grid infrastructures? What examples can you give from the aforementioned deployments?

THOME: There are a few key factors organizations should keep in mind when deploying a grid. Perhaps the biggest is political or cultural. Many organizations are accustomed to controlling their IT assets, and the concept of losing that direct control can be worrisome. 

While there are clear benefits such as access to additional resources for less cost, many business units will still resist the loss of control. They are concerned that the shared resources will not be available to them when they need them. In such cases, the successful organization will have a strong mandate from the top to move to this architecture. 

Also, while it’s possible to build a grid using existing resources from within the enterprise, most enterprises find this isn’t worth the trouble. Confiscating resources from individual departments aggravates the political issues and makes the transitions more difficult. Given these transitions take time to implement, and given the leaps that are made every year in hardware performance and efficiency, it’s often better to buy new (latest and greatest) servers for the grid.

Management of all the servers in a grid also requires some care. For example, Gas Natural found that once they implemented their grid, they had many more servers to manage and monitor. The old methods of system management were no longer effective. Fortunately, vendors such as Oracle have responded with products that are much more adept at managing and monitoring these grids. Oracle Enterprise Manager 10g, for example, allows an administrator to manage all their Oracle Applications, databases, application servers and the hosts that support them. The solution can manage servers as a group, performing a single action (such as patching) against multiple servers, databases and application servers.

Gt: How difficult is it to manage complexity in these large environments? How should organizations plan for this concern?

THOME: As mentioned above, the old ways of managing servers individually do not scale. Administrators need tools to manage and monitor groups of servers. They need automation to eliminate the more mundane tasks. Administrators also need to start managing services and service-level objectives rather than individual components. Oracle Enterprise Manager 10g, for example, can monitor metrics on the various components and services within the grid and notify the administrator should an exception occur.
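The monitoring approach Thome describes, watching service metrics against thresholds and raising a notification on exceptions, can be sketched generically. This is an illustrative example only: the metric names, thresholds and `notify()` hook are hypothetical stand-ins, not the Oracle Enterprise Manager 10g interface.

```python
# Illustrative sketch of service-level monitoring with exception
# notification. All names and thresholds here are hypothetical.

# Service-level thresholds: metric name -> (warning, critical)
THRESHOLDS = {
    "query_response_ms": (500, 2000),
    "cpu_utilization_pct": (80, 95),
}

def evaluate(metrics):
    """Compare observed metrics against thresholds; return alerts."""
    alerts = []
    for name, value in metrics.items():
        warn, crit = THRESHOLDS.get(name, (None, None))
        if crit is not None and value >= crit:
            alerts.append((name, value, "CRITICAL"))
        elif warn is not None and value >= warn:
            alerts.append((name, value, "WARNING"))
    return alerts

def notify(alerts):
    """Stand-in for paging or e-mailing the administrator."""
    for name, value, severity in alerts:
        print(f"{severity}: {name} = {value}")

# One polling cycle over the grid's components:
notify(evaluate({"query_response_ms": 720, "cpu_utilization_pct": 97}))
```

The point of the pattern is that administrators define service-level objectives once and let the tooling watch every component in the grid, rather than eyeballing servers one at a time.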

Gt: What about security? How do issues surrounding security change or grow as grids get bigger?

THOME: Grid environments do impose new requirements on security. These requirements are not necessarily related to size, but more to the security policies of the users involved. 

For example, enterprises like Gas Natural keep their servers in their datacenter and behind a firewall. The organizations sharing the resources in the grid are all part of the same enterprise and have some assumed level of trust. This dramatically simplifies the security problem relative to a grid that would span multiple organizations. 

On the other hand, there is a lot of interest in Grid from application service providers (ASP). Many of these providers have greater security concerns and need firewalls between the various components in their grid. They use network switches to build up virtual LANs to electrically isolate servers used by one “customer” from others. They use fibre channel switches to create zones for SAN storage, ensuring files are only available to authorized users. As resources are re-provisioned, care is taken to ensure they are scrubbed clean — no confidential data or malicious code is left behind for the next user.

Gt: Are there security concerns specific to particular types of grids (e.g., desktop grids, datacenter grids, international grids, etc.)?

THOME: Desktop grids are inherently not secure and are only used for applications where the data is neither confidential nor irreplaceable. What drives security is not whether the grid spans one data center or six international data centers. What drives security is whether the users in the grid, be they in a single data center or multiple data centers, trust each other.

Gt: How do security concerns vary between commercial and research organizations?

THOME: You may think commercial organizations would have more security concerns, but at this stage in grid deployments, most commercial grids are safely within the enterprise. It’s the research and academic users who are trying to build grids that span many users from many organizations — with no single span of control. Many of these grids have therefore had to develop more sophisticated security solutions to not only protect data, but also restrict usage of resources within the grid.

Gt: What about among the various vertical markets within the commercial sector?

THOME: I don't see large variations in security concerns among the various vertical markets. However, if you consider the ASP or hosting market as a vertical, you introduce the complexities of multiple mutually untrusting users sharing resources in a grid.

Gt: Speaking specifically about Oracle's database business, how is database management affected in large Grid environments? What has the company done with 10g in order to maximize simplicity along this front?

THOME: We did a lot in Oracle Database 10g to facilitate management of grids. To begin, we introduced a great deal of self-management features directly into Oracle Database 10g. The easiest thing to manage is the thing that manages itself. 

We also introduced Oracle Enterprise Manager 10g, which can manage many databases, application servers and Oracle Applications, along with their underlying hosts, as a group. For example, run Oracle Enterprise Manager 10g's patch wizard once and it will offer to schedule patching on all databases. Oracle Enterprise Manager 10g has features to facilitate change and configuration management: it can compare configurations and clone configurations to simplify provisioning. And it has a great deal of management and monitoring features that allow database and system administrators to easily monitor service metrics and receive notifications of service-level exceptions.
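The idea of running one administrative action against a whole group of servers, rather than host by host, can be sketched as follows. This is a minimal, hypothetical illustration; `apply_patch()` is a stand-in for whatever per-host mechanism a real tool such as a patch wizard would invoke.

```python
# Hypothetical sketch of group administration: one action (here, a
# patch step) applied uniformly across a managed group of servers.

def apply_patch(host, patch_id):
    # Stand-in for the real per-host patch operation.
    return f"{patch_id} applied to {host}"

def patch_group(hosts, patch_id):
    """Run the same patch action across a whole group of servers."""
    return {host: apply_patch(host, patch_id) for host in hosts}

# The administrator defines the group once, then acts on it as a unit:
group = ["db-node1", "db-node2", "app-node1"]
for status in patch_group(group, "patch-4711").values():
    print(status)
```

The design point is that the group, not the individual server, becomes the unit of administration, which is what keeps management effort roughly constant as the grid grows.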

Gt: Given all the complexity and, to a lesser degree, security issues inherent in Grid implementations, why would an organization want to deploy a grid? What benefits come from these deployments, and how do they outweigh any concerns or obstacles?

THOME: There are three key benefits to Grid computing. First, you can get better information faster. You can bring resources to bear on your business problems as needed. If you have a fixed amount of time, say 24 hours to perform an analysis, you can bring in additional resources to perform better or deeper analysis. And if you have a fixed amount of work, say a report to run, you can bring in additional resources and run it faster. 

Second, you can better align your resources with your business requirements. Enterprises have different business priorities by time of day, day of week, time of month, quarter and year. Also, priorities will shift over time. Sharing resources in a grid makes it easy to move resources from one workload to another, thereby aligning resources and business requirements. 

Lastly, you can save money. You can increase the utilization of your resources by sharing failover and peak capacity across applications, and you can use less expensive components — you can pool multiple smaller inexpensive servers in place of a larger, more powerful, server.

Gt: Is there anything else you'd like to add?

THOME: Flexibility is just as important as ROI and TCO, though its benefits are a bit harder to quantify. Flexibility gives customers the ability to cope with increasingly unpredictable workloads, enabling their business to quickly adapt to changes and avoiding the problems that arise when the business cannot adapt.

Grids provide many benefits today. You get better information faster because you can bring additional resources to bear on a problem to perform better analysis in a shorter period of time. You can better and more quickly align your resources with your business priorities, and save money by increasing utilization (i.e., share peak and failover capacity across applications) and by using smaller less expensive servers (that are then virtualized by the Grid layer to behave as a larger more expensive server). Although Grid technologies are under development, these benefits can be realized today.

One last thing about Grid is that it's easily adopted in an incremental manner. Customers can start small, and then grow their grid as they become more comfortable. You don't have to move everything to the grid all at once — grids and traditional architectures can coexist.
