Oracle’s XTP Sage Shares his Wisdom

By Nicole Hemsoth

April 7, 2008

In this interview, Cameron Purdy, Oracle’s vice president of Fusion Middleware development, discusses how customer demands for extreme transaction processing are evolving, as well as how use cases for Coherence (the data grid solution Purdy developed as founder of Tangosol) have evolved since becoming part of the Oracle software family.

— 

GRIDtoday: What industries are using Coherence, and what industries are taking advantage of XTP, in general?

CAMERON PURDY: A few key industries are probably our No. 1 driving factor both in terms of technology and, in a lot of cases, the revenue associated with the product. By a statistically significant margin, the No. 1 is still financial services. Financial services, from an XTP point of view, drove our initial move into the grid environment and continues to drive a substantial portion, and the extreme portion, of the XTP vision in terms of the features on which we focus. Those features turn out to be universally appropriate in every industry we’re working in, but because of the competitive nature of [the financial services] industry, coupled with the grid brains that have been vacuumed into that industry for their ability to solve some of these problems, financial services has remained the biggest adopter of XTP to date. But it’s certainly not the only one.

Online systems are another huge driver. So online retailers, online travel systems and online gaming companies — the other side of the financial market, the ones who bet on horses instead of stocks — traditionally have been and continue to be a pretty big chunk of the market that we serve.

We continue to work with a growing list of logistics companies, and we’ve been very successful in that market. One of our customers in that market mentioned to us that they are an IT organization that happens to ship packages. Commensurate with being in that market, there is a huge volume of information — rapidly changing information — and lots of ways in which small optimizations can have a big impact on the bottom line. That certainly is one of the use cases for which our product has been very popular — the ability to draw significant conclusions from massive heaps of information. We’ve been very successfully adopted by telcos, as well, and by utilities, for the same reasons: large amounts of information, requirements for uptime, systemic availability.

I’d say over the last year, and probably as a side effect of being part of the Oracle organization, a lot of the adoption we’re seeing has been associated with large companies and organizations and their uptake of service-oriented architectures. As that first wave of SOA has taken hold of the industry, it has certainly propelled Coherence in terms of the requirement for creating systemic availability, for creating continuous availability of systems, and being able to scale out critical services.

When you think of infrastructure, you don’t necessarily think of SOA as being a driving factor behind extreme transaction processing, but if you think about it, if SOA actually takes off within an organization, you’re going to have a relatively small number of services that end up far exceeding their original dedicated footprints. When something is popular in an operating world, the same Slashdot effect that we saw on the Web 10 years ago applies to services in a service-oriented architecture. As you start to expose valuable information within an organization, as soon as it becomes available, everyone with an Excel spreadsheet, anyone with JavaScript capabilities, let alone your IT organization using .NET, Java or anything back to COBOL, now has the ability to grab that information from you and submit transactions to you. The end result is that SOA generates a requirement for additional availability, but it also generates incredible hotspots in terms of infrastructure — the requirement to be able to scale out systems to the levels that we describe when we talk about XTP.

Gt: How are you seeing requirements evolve within these industries?

PURDY: I think what we saw happening a year or two ago has definitely broadened, in terms of adoption, as well as deepened, in terms of requirements. A lot of these systems are mission-critical systems. They’re not science fair projects or academic projects; these are systems driving core infrastructure, particularly in financial institutions, in terms of being able to shrink overnight windows on risk calculations and things like that. We see a shift from what we used to think of as “exotic” use for these types of systems to — at least in those types of environments — mainstream use. I think a lot of it certainly has been driven by better understanding of the technology. Obviously, Gartner and Forrester and other analyst groups have been pretty instrumental in popularizing some of the notions that they witnessed in customer accounts of ours. Thus, these are things that have gone from being exotic to being pretty much mainstream — at least within markets like telcos, financial institutions and high-scale e-commerce systems.

Gt: Are there any industries in particular where you’ve seen a dramatic change in either demand for or use of these types of systems?

PURDY: Certainly, online companies are going to be a poster child for it. In particular, that has to do with the fact that their growth is capable of exceeding any expectation. In other words, even if they don’t require the level of scalability that we can provide, they’re not sure whether they will. So, quite often, we’re seeing architectures adopting Coherence very early on, making sure they have XTP built into their core as a means of insulating them from cost surprises down the line.

When you can achieve, for vast portions of your system, linear scalability on commodity hardware — we’re not talking about million-dollar machines, we’re talking about $2,000 to $5,000 machines and being able to scale out into the thousands of these — you end up having cost predictability. You’re not surprised when your system gets so loaded down that you have to buy more hardware, because you know how much hardware it’s actually going to take to increase your throughput. Basically, it gives you assurance that you’re going to be able to meet your SLAs or meet the perceived requirements of your users at any level of scale. That kind of insurance is priceless for a CIO or an architect of a system, as it gives them the ability, early on in the project, to address issues that would otherwise be crippling or, from the point of view of a start-up, life-ending.
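To make the cost-predictability argument concrete, here is a minimal back-of-the-envelope sketch in Java of capacity planning under an assumed near-linear scaling model. The per-node throughput, target load and node price are hypothetical placeholders, not figures taken from the interview.

public class CapacityEstimate {
    public static void main(String[] args) {
        // Hypothetical figures for illustration only.
        double perNodeTxPerSec = 5_000.0;   // measured throughput of one commodity node (assumed)
        double targetTxPerSec  = 250_000.0; // projected peak load (assumed)
        double nodeCostUsd     = 3_500.0;   // within the $2,000-$5,000 commodity range mentioned above

        // Under (near-)linear scale-out, required nodes grow proportionally with load,
        // so the hardware cost of a future throughput target can be estimated up front.
        int nodes = (int) Math.ceil(targetTxPerSec / perNodeTxPerSec);
        System.out.printf("Nodes needed: %d, hardware cost: $%,.0f%n", nodes, nodes * nodeCostUsd);
    }
}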

We’re seeing grid technology adopted at a much earlier stage now than before, when many of these systems were only coming to us after they hit the wall. We certainly gained a reputation for helping companies that had hit the wall, but it’s much more gratifying to see companies adopting it as a core part of what they’re doing.

Gt: If Coherence handles the data aspect of an XTP environment, what are your customers doing to address the compute aspect?

PURDY: I think companies already have the compute requirement. We’re talking about companies using DataSynapse or Platform, for example, or, more recently, using some of the open-source offerings such as GridGain. These are companies that, by and large, have traditional compute-intensive loads that they already deployed on those environments. So it’s not so much that these companies move to compute after they have a data grid; it’s more that the compute loads they have are data-intensive, and for a number of those types of applications, scaling out a compute grid without having a data grid stitched into it just can’t happen. They’ll get to a point where it doesn’t matter how many compute nodes they add, they’re not going to get anything done because the information is bottlenecked.
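As an illustration of why the data side becomes the limiting factor, here is a toy Java sketch of the partitioning idea behind a data grid: keys are hashed across partitions so that work can be routed to the node that already owns the data, rather than every compute task funneling through a single database. This is a simplified model for illustration only, not Coherence’s actual API or implementation.

import java.util.HashMap;
import java.util.Map;

public class PartitionedStore {
    // Each inner map stands in for the memory of one grid node.
    private final Map<String, String>[] partitions;

    @SuppressWarnings("unchecked")
    public PartitionedStore(int partitionCount) {
        partitions = new Map[partitionCount];
        for (int i = 0; i < partitionCount; i++) {
            partitions[i] = new HashMap<>();
        }
    }

    // The same key always hashes to the same partition, so a task that needs
    // this key can be dispatched to the one node that holds it.
    private int partitionFor(String key) {
        return Math.floorMod(key.hashCode(), partitions.length);
    }

    public void put(String key, String value) {
        partitions[partitionFor(key)].put(key, value);
    }

    public String get(String key) {
        return partitions[partitionFor(key)].get(key);
    }

    public static void main(String[] args) {
        PartitionedStore store = new PartitionedStore(4); // e.g., four "nodes"
        store.put("trade-42", "AAPL,100,152.30");
        System.out.println(store.get("trade-42"));
    }
}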

This is a lot of what drove us into the grid space to begin with. In some of the earliest grid projects at some of the banks here in the States, they were already doing the compute side, but they needed the data to keep up with the compute. The data grid is a very natural fit with high-scale compute infrastructures, and we continue to invest heavily in that area. Our customers see that infrastructure not as a compute infrastructure or as a data infrastructure, but as a utility. It’s an investment that they’ve made that allows them to deploy large-scale applications — applications that at certain times of day might consume hundreds of thousands of CPUs in parallel, and at other times might not need anything — out into a utility environment.

Gt: Overall, how has customer use of Coherence evolved over the past six months to a year?

PURDY: One of the things about going from being fairly exotic to being sort of mainstream is that a lot of the considerations our customers had when we were working with them a few years ago are not the same considerations they have today. Our customers have certainly focused much more on manageability and monitoring, what we refer to internally as “Project iPod” — this idea that just because you’re controlling 10,000 CPUs, it doesn’t mean it has to be as complex as configuring 10,000 servers. For many of these applications, it should be as simple as pressing the “play” button. If it needs 10,000 CPUs to do that, it should be able to allocate, deploy, start up, configure, etc., everything it needs to do across extreme-scale environments. And just as easily, when it’s done with what it has to do, if it’s no longer needed, it should be able to fold itself back up and put itself away.
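The allocate-on-demand, fold-back-up behavior can be illustrated in miniature with an elastic thread pool: worker capacity is created only while there is work and released after an idle timeout. This Java snippet is an in-process analogy for the provisioning idea described above, not Oracle’s actual “Project iPod” tooling; the pool bounds and timeout are arbitrary.

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ElasticWorkers {
    public static void main(String[] args) throws InterruptedException {
        // Grow from zero workers up to a cap on demand; idle workers are
        // released after 30 seconds, so the pool "puts itself away" when done.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                0, 64, 30, TimeUnit.SECONDS,
                new SynchronousQueue<>(),
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 100; i++) {
            final int task = i;
            pool.execute(() -> System.out.println(
                    "task " + task + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}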

Our customers are no longer 100 percent rocket scientists; they’re not all eating and drinking the technology at that level anymore, so our software continues to evolve to be more and more IT-friendly. It focuses on best practices and documentation, and on what, years ago in mainframe parlance, we referred to as “serviceability.” That is, the ability to have your software not only be configured and rolled out, but actually be morphable as it runs, to be able to be serviced, upgraded and hot-deployed. All of these are critical to our customers.

In addition, because we’re part of Oracle, the integration with the database has been more and more a desired outcome for many of our customers, so that obviously is an area in which we’ve significantly increased our investment. Also, as I mentioned earlier, I think the investment Oracle has made in service-oriented architecture has influenced a lot of the use cases we’re seeing. From the big shift point of view, what we’re seeing is: (1) a shift of more business users to the grid; (2) more integration required across the database and into the data grid; and (3) the integration with service-oriented technology.

Gt: How has being part of Oracle, and Oracle Fusion middleware specifically, affected what your customers expect?

PURDY: Fortunately, we’ve been able to keep the entire customer base through the transaction and through the subsequent time period, so we’ve been able to keep those communications open with our customers. We’ve obviously significantly expanded the customer base by being part of a very large organization, and we continue to exceed all expectations on goals that were set forth there.

The end result, though, is that the requests we get from customers continue to increase as a result of this broadening of the customers we’re working with. The benefits to our organization are a much clearer picture of what the market is driving toward and, ultimately, what the needs are of the companies that are adopting this technology. As much as our customers look to us as visionaries, I think the flip side of that is also true: the technology we create is in direct correlation to the trust our customers invest in us, in terms of what they share with us about the problems they’re attempting to solve. It’s a trust relationship, but it works in both directions.

Also, being part of Oracle, we’ve been able to scale our organization in terms of sales, development, marketing, and the level and quality of service that we are able to provide to our customers. From all those vantage points, it’s been an overwhelming success.

Gt: You noted that you’re being asked more often to integrate Coherence into existing Oracle database environments. How do you handle this?

PURDY: Traditionally, we’ve had a number of ways to integrate with the database, including asynchronously through “write-behind” technology, which I think is a pretty big game-changer from an XTP point of view. The ability to do reliable write-behind, which I think we introduced six years ago, is one of the technologies that propelled us well beyond the noise in the market. Also, in terms of read and write coalescing and things like that, we’ve been able to dramatically increase the effectiveness of Oracle databases in large-scale compute grid environments and extreme-scale e-commerce systems.
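For readers unfamiliar with the pattern, here is a simplified Java sketch of write-behind with write coalescing: the caller’s put completes against the in-memory cache immediately, and a background task later flushes dirty entries to the system of record, so repeated writes to the same key collapse into a single store. This is an illustrative pattern sketch, not Coherence’s implementation; the flush interval and the println standing in for a database write are placeholders.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class WriteBehindCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Map<String, String> dirty = new ConcurrentHashMap<>();
    private final ScheduledExecutorService flusher =
            Executors.newSingleThreadScheduledExecutor();

    public WriteBehindCache(long flushIntervalMillis) {
        flusher.scheduleAtFixedRate(this::flush, flushIntervalMillis,
                flushIntervalMillis, TimeUnit.MILLISECONDS);
    }

    // The caller returns as soon as the in-memory copy is updated.
    public void put(String key, String value) {
        cache.put(key, value);
        dirty.put(key, value); // repeated puts to the same key coalesce here
    }

    public String get(String key) {
        return cache.get(key);
    }

    // Periodically pushes the accumulated (coalesced) changes to the store of record.
    private void flush() {
        for (String key : dirty.keySet()) {
            String value = dirty.remove(key);
            if (value != null) {
                System.out.println("STORE " + key + " = " + value); // stand-in for a batched database write
            }
        }
    }

    public void shutdown() {
        flush();
        flusher.shutdown();
    }

    public static void main(String[] args) throws InterruptedException {
        WriteBehindCache cache = new WriteBehindCache(500);
        cache.put("order-1", "NEW");
        cache.put("order-1", "FILLED"); // coalesces with the previous write
        Thread.sleep(1_000);            // allow the background flush to run
        cache.shutdown();
    }
}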

Additionally, Oracle has invested in and published materials on how the database can provide information in real time, in the form of event feeds, for example, out to clients of the database. We’re working hand-in-hand with that organization to be able to take advantage of that. We’re not talking about any top-secret, backdoor, hidden stuff. These are all published interfaces and, as much as possible, standards-based approaches to integration with the Oracle database. From our point of view, it’s a good investment for us, and it’s very cost-effective for our customers, as well.

Gt: If you could narrow it down, what is the ideal use case for Coherence, or for any data grid? What’s the perfect job?

PURDY: There are so many examples of a perfect fit that it’s hard to boil it down to one example. The answer I’ve given to customers when they ask that type of question is “How many applications do you have that you want to have available all the time? How many applications do you want to be able to scale up? How many applications do you have where performance matters? In how many of these applications would you like to have real-time information, information consistency and information reliability?”

What Coherence provides out of the box is not a solution to one of these problems, it’s not a “here’s how you make your app faster” or “here’s how you make your app scale.” Anyone can make an application faster, anyone can build a system that’s 99.999 percent available. We have known solutions for any of these problems in isolation. What’s difficult isn’t solving for one of them, it’s solving for all of those variables at the same time.

What Coherence provides to our customers is a solution to availability, to reliability of information, to linear scalability and to predictable high performance, and it solves those simultaneously. It provides trusted infrastructure for building that next generation of grid-enabled out-of-the-box applications. This has been our differentiator in the market, this trusted status of truly being able to provide information reliability within large-scale, mission-critical environments, and it certainly is one of the reasons this acquisition has worked so well. We have fundamentally invested in those core tenets of these systems, and our customers understand and respect that. 

 
Gt: How do you view the future of the data grid market, specifically around what business needs are going to be driving the next round of technological advancements?

PURDY: One of the greatest things about our industry is the insatiable desire for “more, better, faster.” When you look at the increase in data volumes — whether you’re looking at the doubling of financial feed information every six to nine months or you’re looking at the numbers provided by storage vendors in terms of how much information their customers are managing on an ongoing basis — there seems to be no end to demand for the ability to manage, analyze, produce and calculate. It’s fair to say that our customers astound us with their appetites for being able to [manage] just huge systems.

What we see overall is a move toward consolidating many of the capabilities that we associate today with grid, virtualization and service-oriented architectures, as well as the manageability of all of those, and turning these point solutions toward the problems of utility computing, capacity on demand, SLA management and dynamic infrastructure. I think our goal as an industry has been to move from having grid, for example, be considered an exotic concept to having it considered almost a de facto standard for how all applications should be built and deployed, because we’re not talking about exotic concepts; we’re talking about things everybody wants. When was the last time you talked to a customer that didn’t care about scalability or didn’t care about systemic availability? These are things that all of our customers are faced with as challenges, and being able to provide a systemic solution to those challenges is, from a customer point of view, just a natural evolution of what our industry has been providing.
