Oracle’s XTP Sage Shares his Wisdom

By Nicole Hemsoth

April 7, 2008

In this interview, Cameron Purdy, Oracle’s vice president of Fusion Middleware development, discusses how customer demands for extreme transaction processing are evolving, as well as how use cases for Coherence (the data grid solution Purdy developed as founder of Tangosol) have evolved since becoming part of the Oracle software family.

— 

GRIDtoday: What industries are using Coherence, and what industries are taking advantage of XTP, in general?

CAMERON PURDY: A few key industries are probably our No. 1 driving factor both in terms of technology and, in a lot of cases, the revenue associated with the product. By a statistically significant margin, the No. 1 is still financial services. Financial services, from an XTP point of view, drove our initial move into the grid environment and continues to drive a substantial portion, and the extreme portion, of the XTP vision in terms of the features on which we focus. Those features turn out to be universally appropriate in every industry we’re working in, but because of the competitive nature of [the financial services] industry, coupled with the grid brains that have been vacuumed into that industry for their ability to solve some of these problems, financial services has remained the biggest adopter of XTP to date. But it’s certainly not the only one.

Online systems are another huge driver. So online retailers, online travel systems and online gaming companies — the other side of the financial market, the ones who bet on horses instead of stocks — traditionally have been and continue to be a pretty big chunk of market that we serve.

We continue to work with a growing list of logistics companies, and we’ve been very successful in that market. One of our customers in that market mentioned to us that they are an IT organization that happens to ship packages. Commensurate with being in that market, there is a huge volume of information — rapidly changing information — and lots of opportunities for small optimizations to have a big impact on the bottom line. That certainly is one of the use cases for which our product has been very popular — the ability to draw significant conclusions from massive heaps of information. We’ve been very successfully adopted by telcos, as well, and by utilities for the same reasons: large amounts of information, requirements for uptime, systemic availability.

I’d say over the last year, and probably as a side effect of being part of the Oracle organization, a lot of the adoption we’re seeing has been associated with large companies and organizations and their uptake of service-oriented architectures. As that first wave of SOA has taken hold of the industry, it has certainly propelled Coherence in terms of the requirement for creating systemic availability, for creating continuous availability of systems, and being able to scale out critical services.

When you think of infrastructure, you don’t necessarily think of SOA as being a driving factor behind extreme transaction processing, but if you think about it, if SOA actually takes off within an organization, you’re going to have a relatively small number of services that end up far exceeding their original dedicated footprints. When something becomes popular in the operational world, the same Slashdot effect that we saw on the Web 10 years ago applies to services in a service-oriented architecture. As you start to expose valuable information within an organization, as soon as it becomes available, everyone with an Excel spreadsheet, anyone with JavaScript capabilities, let alone your IT organization using .NET, Java or anything back to COBOL, now has the ability to grab that information from you and submit transactions to you. The end result is that SOA generates a requirement for additional availability, but it also generates incredible hotspots in terms of infrastructure — the requirement to be able to scale out systems to the levels that we describe when we talk about XTP.

Gt: How are you seeing requirements evolve within these industries?

PURDY: I think what we saw happening a year or two ago has definitely broadened, in terms of adoption, as well as deepened, in terms of requirements. A lot of these systems are mission-critical systems. They’re not science fair projects or academic projects; these are systems driving core infrastructure, particularly in financial institutions, in terms of being able to shrink overnight windows on risk calculations and things like that. We see a shift from what we used to think of as “exotic” use for these types of systems to — at least in those types of environments — mainstream use. I think a lot of it certainly has been driven by better understanding of the technology. Obviously, Gartner and Forrester and other analyst groups have been pretty instrumental in popularizing some of the notions that they witnessed in customer accounts of ours. Thus, these are things that have gone from being exotic to being pretty much mainstream — at least within markets like telcos, financial institutions and high-scale e-commerce systems.

Gt: Are there any industries in particular where you’ve seen a dramatic change in either demand for or use of these types of systems?

PURDY: Certainly, online companies are going to be a poster child for it. In particular, that has to do with the fact that their growth is capable of exceeding any expectation. In other words, even if they don’t yet require the level of scalability that we can provide, they can’t be sure they won’t. So, quite often, we’re seeing architectures adopting Coherence very early on, making sure they have XTP built into their core as a means of insulating them from cost surprises down the line.

When you can achieve, for vast portions of your system, linear scalability on commodity hardware — we’re not talking about million-dollar machines, we’re talking about $2,000 to $5,000 machines and being able to scale out into the thousands of these — you end up having cost predictability. You’re not surprised when your system gets so loaded down that you have to buy more hardware, because you know how much hardware it’s actually going to take to increase your throughput. Basically, it gives you assurance that you’re going to be able to meet your SLAs, or meet the perceived requirements of your users, at any level of scale. That kind of insurance is priceless for a CIO or a system architect: it gives them the ability, early on in the project, to address issues that would otherwise be crippling or, from the point of view of a start-up, life-ending.
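The cost-predictability argument is just arithmetic once you assume linear scaling. The short Java sketch below makes it concrete; the per-node throughput, node cost and target throughput are hypothetical placeholders chosen for illustration, not Coherence benchmarks or figures from the interview (apart from the $2,000-to-$5,000 hardware range Purdy cites).

    // Illustrative capacity-planning arithmetic under an assumed linear-scaling model.
    // The per-node throughput and cost figures are hypothetical placeholders, not
    // measurements of Coherence or of any real system.
    public class CapacityEstimate {
        public static void main(String[] args) {
            double opsPerNode  = 5_000;   // assumed sustained operations/sec per commodity node
            double nodeCostUsd = 3_500;   // assumed cost per node (the $2,000-$5,000 range cited)
            double targetOps   = 250_000; // required peak throughput for the application

            // Under linear scaling, nodes needed is simply target / per-node capacity.
            int nodes = (int) Math.ceil(targetOps / opsPerNode);

            System.out.printf("Nodes required: %d%n", nodes);
            System.out.printf("Estimated hardware cost: $%,.0f%n", nodes * nodeCostUsd);
            // Doubling targetOps doubles nodes and cost; that proportionality is the
            // "cost predictability" the linear-scalability claim is about.
        }
    }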

We’re seeing grid technology adopted much earlier in the project lifecycle now; in the early days, many of these systems came to us only when they hit the wall. We certainly gained a reputation for helping companies that had hit the wall, but it’s much more gratifying to see companies adopting it as a core part of what they’re doing.

Gt: If Coherence handles the data aspect of an XTP environment, what are your customers doing to address the compute aspect?

PURDY: I think companies already have the compute requirement. We’re talking about companies using DataSynapse or Platform, for example, or, more recently, some of the open-source offerings such as GridGain. These are companies that, by and large, have traditional compute-intensive loads already deployed on those environments. So it’s not so much that these companies move to compute after they have a data grid; it’s more that the compute loads they have are data-intensive, and for a number of those types of applications, scaling out a compute grid without a data grid stitched into it just can’t happen. They’ll get to a point where it doesn’t matter how many compute nodes they add, they’re not going to get anything done because the information is bottlenecked.
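The bottleneck Purdy describes is commonly avoided by shipping the computation to the nodes that already hold the data, rather than pulling the data across the network to the compute tier. The Java sketch below illustrates that pattern with a made-up PartitionedGrid interface; it is deliberately not the Coherence API, just a minimal stand-in for the idea of in-grid processing.

    import java.util.Map;
    import java.util.function.Function;

    // Hypothetical sketch of "ship the computation to the data" in a partitioned grid.
    // PartitionedGrid is a stand-in interface, not a real Coherence type: each partition
    // owns a slice of the entries and executes submitted functions locally, so only the
    // small function and its results cross the network, never the bulk data.
    public class ComputeNearData {

        interface PartitionedGrid<K, V> {
            // Execute 'work' on the node that owns each entry; collect the results.
            <R> Map<K, R> invokeAll(Function<V, R> work);
        }

        // Example: a risk number computed across millions of positions without
        // dragging the positions themselves through a central bottleneck.
        static double totalRisk(PartitionedGrid<String, Position> positions) {
            return positions.invokeAll(p -> p.quantity * p.riskWeight) // runs in-grid
                            .values().stream()
                            .mapToDouble(Double::doubleValue)
                            .sum();                                    // tiny aggregation step
        }

        static final class Position {
            final double quantity, riskWeight;
            Position(double quantity, double riskWeight) {
                this.quantity = quantity;
                this.riskWeight = riskWeight;
            }
        }
    }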

This is a lot of what drove us into the grid space to begin with. In some of the earliest grid projects at some of the banks here in the States, they were already doing the compute side, but they needed the data to keep up with the compute. The data grid is a very natural fit with high-scale compute infrastructures, and we continue to invest heavily in that area. Our customers see that infrastructure not as a compute infrastructure or as a data infrastructure, but as a utility. It’s an investment that they’ve made that allows them to deploy large-scale applications — applications that at certain times of day might consume hundreds of thousands of CPUs in parallel, and at other times might not need anything — out into a utility environment.

Gt: Overall, how has customer use of Coherence evolved over the past six months to a year?

PURDY: One of the things about going from being fairly exotic to being sort of mainstream is that a lot of the considerations our customers had when we were working with them a few years ago are not the same considerations they have today. Our customers have certainly focused much more on manageability and monitoring, what we refer to internally as “Project iPod” — this idea that just because you’re controlling 10,000 CPUs, it doesn’t mean it has to be as complex as configuring 10,000 servers. For many of these applications, it should be as simple as pressing the “play” button. If it needs 10,000 CPUs to do that, it should be able to allocate, deploy, start up, configure, etc., everything it needs across extreme-scale environments. And just as easily, when it’s done with what it has to do, if it’s no longer needed, it should be able to fold itself back up and put itself away.
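The “play button” idea is easier to picture as an interface. The sketch below is purely hypothetical (“Project iPod” was an internal code name, and nothing here is a real Oracle or Coherence API); it only captures the intent: one call to bring an extreme-scale deployment up, one call to fold it back down.

    // Purely hypothetical sketch of the "press play" idea behind "Project iPod".
    // Nothing here is a real Oracle or Coherence API; it only illustrates the goal
    // of collapsing allocate/deploy/configure/start into a single operation.
    public interface GridApplication extends AutoCloseable {

        // Allocate nodes, deploy the application, configure it and start it,
        // however many CPUs that requires. One call, like pressing "play".
        void play(int requestedCpus);

        // Fold the deployment back up and release the resources when done.
        @Override
        void close();
    }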

Our customers are no longer 100 percent rocket scientists; they’re not all eating and drinking the technology at that level anymore, so our software continues to evolve to be more and more IT-friendly. It focuses on best practices and documentation, and on what, years ago in mainframe parlance, we referred to as “serviceability.” That is, the ability to have your software not only be configured and rolled out, but actually be morphable as it runs: able to be serviced, upgraded and hot-deployed. All of these are critical to our customers.

In addition, because we’re part of Oracle, the integration with the database has been more and more a desired outcome for many of our customers, so that obviously is an area in which we’ve significantly increased our investment. Also, as I mentioned earlier, I think the investment Oracle has made in service-oriented architecture has influenced a lot of the use cases we’re seeing. From the big shift point of view, what we’re seeing is: (1) a shift of more business users to the grid; (2) more integration required across the database and into the data grid; and (3) the integration with service-oriented technology.

Gt: How has being part of Oracle, and Oracle Fusion middleware specifically, affected what your customers expect?

PURDY: Fortunately, we’ve been able to keep the entire customer base through the transaction and through the subsequent time period, so we’ve been able to keep those communications open with our customers. We’ve obviously significantly expanded the customer base by being part of a very large organization, and we continue to exceed all expectations on goals that were set forth there.

The end result, though, is that the requests we get from customers continue to increase as a result of this broadening of the customers we’re working with. The benefit to our organization is a much clearer picture of what the market is driving toward and, ultimately, what the needs are of the companies adopting this technology. As much as our customers look to us as visionaries, I think the flip side is also true: the technology we create is in direct correlation to the needs our customers entrust to us, in terms of what they share with us about the problems they’re attempting to solve. It’s a trust relationship, but it works in both directions.

Also, being part of Oracle, we’ve been able to scale our organization in terms of sales, development, marketing, and the level and quality of service that we are able to provide to our customers. From all those vantage points, it’s been an overwhelming success.

Gt: You noted that you’re being asked more often to integrate Coherence into existing Oracle database environments. How do you handle this?

PURDY: Traditionally, we’ve had a number of ways to integrate with the database, including asynchronously through “write-behind” technology, which I think is a pretty big game-changer from an XTP point of view. The ability to do reliable write-behind, which I think we introduced six years ago, is one of the technologies that propelled us well beyond the noise in the market. Also, in terms of read and write coalescing and things like that, we’ve been able to dramatically increase the effectiveness of Oracle databases in large-scale compute grid environments and extreme-scale e-commerce systems.
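Write-behind is the piece of that database integration that is easiest to show in a few lines. The Java sketch below is a simplified, generic illustration rather than Coherence’s actual implementation: puts update the in-memory map immediately and are recorded in a per-key “dirty” map, and a background task flushes that map to the database on a delay, so repeated writes to the same key coalesce into a single database write.

    import java.util.Map;
    import java.util.concurrent.*;
    import java.util.function.BiConsumer;

    // Simplified, generic illustration of write-behind with coalescing.
    // This is NOT Coherence's implementation; it only shows the mechanism:
    // the cache is updated synchronously, the database asynchronously, and
    // multiple updates to one key within the delay collapse into one write.
    public class WriteBehindCache<K, V> {

        private final ConcurrentMap<K, V> cache = new ConcurrentHashMap<>();
        private final ConcurrentMap<K, V> dirty = new ConcurrentHashMap<>();
        private final BiConsumer<K, V> storeToDatabase;   // e.g. a JDBC upsert

        public WriteBehindCache(BiConsumer<K, V> storeToDatabase, long flushDelayMillis) {
            this.storeToDatabase = storeToDatabase;
            Executors.newSingleThreadScheduledExecutor()
                     .scheduleWithFixedDelay(this::flush, flushDelayMillis,
                                             flushDelayMillis, TimeUnit.MILLISECONDS);
        }

        public void put(K key, V value) {
            cache.put(key, value);   // the caller sees the write immediately
            dirty.put(key, value);   // later writes to 'key' overwrite (coalesce) here
        }

        public V get(K key) {
            return cache.get(key);
        }

        private void flush() {
            // Drain the coalesced entries; each key reaches the database at most once
            // per flush, no matter how many times it was updated in memory.
            for (Map.Entry<K, V> e : dirty.entrySet()) {
                if (dirty.remove(e.getKey(), e.getValue())) {
                    storeToDatabase.accept(e.getKey(), e.getValue());
                }
            }
        }
    }

A production write-behind scheme also needs failure handling, retries and ordering guarantees; the point of the sketch is only the coalescing, which is how the in-memory tier absorbs write bursts that would otherwise land directly on the database.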

Additionally, Oracle has invested in, and published materials on, how the database can provide information in real time, in the form of event feeds, for example, out to clients of the database. We’re working hand-in-hand with that organization to be able to take advantage of that. We’re not talking about any top-secret, backdoor, hidden stuff. These are all published interfaces and, as much as possible, standards-based approaches to integration with the Oracle database. From our point of view, it’s a good investment for us, and it’s very cost-effective for our customers, as well.

Gt: If you could narrow it down, what is the ideal use case for Coherence, or for any data grid? What’s the perfect job?

PURDY: There are so many examples of a perfect fit that it’s hard to boil it down to one example. The answer I’ve given to customers when they ask that type of question is “How many applications do you have that you want to have available all the time? How many applications do you want to be able to scale up? How many applications do you have where performance matters? In how many of these applications would you like to have real-time information, information consistency and information reliability?”

What Coherence provides out of the box is not a solution to just one of these problems; it’s not a “here’s how you make your app faster” or “here’s how you make your app scale.” Anyone can make an application faster; anyone can build a system that’s 99.999 percent available. We have known solutions for any of these problems in isolation. What’s difficult isn’t solving for one of them; it’s solving for all of those variables at the same time.

What Coherence provides to our customers is a solution to availability, to reliability of information, to linear scalability and to predictable high performance, and it solves those simultaneously. It provides trusted infrastructure for building that next generation of grid-enabled out-of-the-box applications. This has been our differentiator in the market, this trusted status of truly being able to provide information reliability within large-scale, mission-critical environments, and it certainly is one of the reasons this acquisition has worked so well. We have fundamentally invested in those core tenets of these systems, and our customers understand and respect that. 

 
Gt: How do you view the future of the data grid market, specifically around what business needs are going to be driving the next round of technological advancements?

PURDY: One of the greatest things about our industry is the insatiable desire for “more, better, faster.” When you look at the increase in data volumes — whether you’re looking at the doubling of financial feed information every six to nine months or you’re looking at the numbers provided by storage vendors in terms of how much information their customers are managing on an ongoing basis — there seems to be no end to demand for the ability to manage, analyze, produce and calculate. It’s fair to say that our customers astound us with their appetites for being able to [manage] just huge systems.

What we see overall is a move toward consolidating many of the capabilities that we associate today with grid, virtualization and service-oriented architectures, as well as the manageability of all of those, and turning those point solutions toward the problems of utility computing, capacity on demand, SLA management and dynamic infrastructure. I think our goal as an industry has been to move from having grid, for example, be considered an exotic concept to having it considered almost a de facto standard for how all applications should be built and deployed. We’re not talking about exotic concepts; we’re talking about things everybody wants. When was the last time you talked to a customer that didn’t care about scalability or didn’t care about systemic availability? These are things that all of our customers face as challenges, and being able to provide a systemic solution to those challenges is, from a customer point of view, just a natural evolution of what our industry has been providing.
