IBM’s Take on Grid, Virtualization and SOA

By Derrick Harris, Editor

April 3, 2006

In this Q&A with GRIDtoday, IBM's program director for Grid computing strategy and technology, Matt Haynos, previews his upcoming presentation at LinuxWorld's “Enterprise Grid Solution Showcase” and discusses the synergies among Grid computing, virtualization and SOA. At IBM, he says, “Grid IS virtualization.”



GRIDtoday:
Can you give me a preview of what you'll be discussing at LinuxWorld? What can attendees expect to learn?

MATT HAYNOS: I'll be outlining the relationship and synergies between virtualization, Grid and service-oriented architectures. There's been a lot of visibility and momentum around each of these, and what I hope to do is describe, in a very simple manner, how Grid and virtualization, as infrastructure capabilities, can complement and accelerate companies' SOA plans. Grid/virtualization and SOA are very synergistic thoughts, and what I want to articulate is why infrastructures based on virtualization and grids are strong foundations for SOAs.

Gt: Can you give brief definitions of Grid, virtualization and SOA?

HAYNOS: I'm not sure I'm game for defining “Grid”, but here goes! At IBM, we try to keep it pretty simple: Grid IS virtualization. In particular, extending the virtualization thought to two important areas: workload and information virtualization. There are multiple layers to virtualization, starting from the microprocessor and including server, storage and network virtualization. Grid is a logical extension of virtualization to encompass both workload and information across a distributed infrastructure.

At a high level, workload virtualization is about separating services and applications from the underlying infrastructure; abstracting the concept of workload execution so that, from the standpoint of end-users or submitters, the overall Grid system appears as a single set of capabilities.
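This "single set of capabilities" idea can be illustrated with a toy sketch (not IBM's middleware; the pool here merely stands in for grid scheduling). The submitter calls one `submit`-style interface and never learns which worker actually ran the task:

```python
from concurrent.futures import ThreadPoolExecutor

def render_frame(frame_id: int) -> str:
    # Placeholder workload; a real grid job would be far heavier.
    return f"frame-{frame_id}-rendered"

def run_workload(frame_ids):
    # From the submitter's standpoint this pool IS "the grid": one
    # interface, with placement of each task decided behind it.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(render_frame, frame_ids))

if __name__ == "__main__":
    print(run_workload(range(4)))
```

The point of the sketch is only the separation of concerns: the application code (`render_frame`) carries no knowledge of the infrastructure executing it.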

Information virtualization is a similar concept, but for data. If you start moving application and service execution around dynamically, or start distributing it more widely, you really need the information (data) the application requires in the proper format and with “near-local” performance to achieve the goal: improvements in “time to results” or in resource optimization. If you don't have that information readily accessible, you can expend a lot of energy scheduling and managing the execution and still gain no advantage, because the application is “bottlenecked” waiting for access to the data it needs.
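The data-locality problem above can be sketched as an access layer that resolves a logical data name and stages it locally on first use, so subsequent accesses run at "near-local" speed. All names here are hypothetical; the cache simply stands in for grid data staging:

```python
class InformationLayer:
    """Toy information-virtualization layer: callers ask for data by
    logical name and never deal with where it physically lives."""

    def __init__(self, remote_fetch):
        self._remote_fetch = remote_fetch   # e.g. a grid data service call
        self._staged = {}                   # stands in for local staging

    def get(self, logical_name: str) -> str:
        # Fetch (and stage) only on first access; later accesses are local.
        if logical_name not in self._staged:
            self._staged[logical_name] = self._remote_fetch(logical_name)
        return self._staged[logical_name]

fetch_count = 0
def slow_remote_fetch(name: str) -> str:
    global fetch_count
    fetch_count += 1                        # count expensive remote trips
    return f"contents-of-{name}"

layer = InformationLayer(slow_remote_fetch)
layer.get("risk-model-inputs")
layer.get("risk-model-inputs")             # served from local staging
```

Without some layer like this, every task placement decision risks moving compute away from its data, which is exactly the bottleneck described above.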

Finally, SOA is an architectural style that supports service orientation, which is a way of integrating your business as linked services and the outcomes they bring. A service is simply a repeatable business task. For example, checking a customer's credit, or opening a new account.
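The "service as repeatable business task" idea can be made concrete with a hypothetical sketch: the credit-check task is expressed as a contract, and callers depend only on that contract, never on whichever system implements it:

```python
from dataclasses import dataclass

@dataclass
class CreditResult:
    customer_id: str
    approved: bool

class CreditCheckService:
    """Contract for the 'check a customer's credit' business task."""
    def check_credit(self, customer_id: str, amount: float) -> CreditResult:
        raise NotImplementedError

class SimpleRulesCreditCheck(CreditCheckService):
    # Stand-in implementation; in a real SOA deployment this call would
    # be routed to a remote service endpoint instead of local rules.
    def check_credit(self, customer_id: str, amount: float) -> CreditResult:
        return CreditResult(customer_id, approved=amount <= 5000)

service: CreditCheckService = SimpleRulesCreditCheck()
result = service.check_credit("cust-42", 1200.0)
```

Swapping `SimpleRulesCreditCheck` for a different provider changes nothing for consumers of the service, which is the integration property SOA is after.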

Gt: How can these technologies be utilized in the same infrastructure?

HAYNOS: Companies are adopting and utilizing SOA as a way to align and architect their application architecture to support business processes. The challenge today is to quickly assemble resources to respond to new business opportunities and endeavors. The companies that can do this — and are nimble and fleet-of-foot — realize incredible competitive advantage. Innovation in terms of process is just as important as, or maybe even more important than, technology innovation. Witness what we are seeing with “mash up” applications. The software development and deployment lifecycle is days and weeks now, not months — all facilitated by the “services” notion.

Recently, BusinessWeek's cover story asked “Is Your Business Fast Enough?” and stated “speed to market is now the ultimate competitive weapon.” IBM's Global Innovation Outlook 2.0 declared that “we're witnessing the rise of a new breed of very small and highly specialized businesses that are not only competing globally, but in some cases seriously disrupting existing business models and paradigms” and it noted a company — Apex Digital — that actually generated $1 billion in revenues in 2002 with fewer than 100 employees.

So, it starts with business processes, tasks and endeavors, and the SOA approach is an architectural approach around this concept. It's really a business thought and how people, processes and information can be integrated in a seamless and coordinated way. So architecture, the relationship between services and composite applications, and how these facilitate business processes and new business opportunities — both within an enterprise and across the extended network — are vital.

Now, the question to ask is do I have an underlying infrastructure to support this agility? Once I've decomposed my processes into services and tasks accordingly, what kind of IT infrastructure do I need? How do I manage it? And that's where virtualization and Grid come in. Services are dynamic. They move around, they can be fleeting and they need to be started and stopped on demand. Putting all of this logic and capability into Grid middleware and letting it place and move services for execution makes all the sense in the world. When new resources come online, the Grid middleware recognizes them and can deploy new services and composite applications to them. If SOA is about separating applications from services, Grid is about separating services and applications from the underlying infrastructure. It's one of the reasons why containers and execution environments are so important, as are technologies like virtual machines. We've seen incredible adoption of WebSphere Extended Deployment in support of SOAs.

Gt: Would it be accurate to describe virtualization and SOA as stepping stones to a Grid architecture? Why or why not?

HAYNOS: I don't necessarily see it that way. In some sense, Grid and virtualization are distinct thoughts from SOA. SOA is more of a business and application thought: how you decompose processes into constituent services and tasks. Grid and virtualization are infrastructure thoughts, how your resources and your infrastructure management supports the dynamic nature of SOAs and how you match resources — either execution engines or information — to services and composite applications.

There's been a lot of talk about convergence of Grid and SOA. I think this might be the wrong way of looking at it. Certainly, a lot of Grid middleware is built in a service-oriented way and style, and you can argue that Grid is SOA, but the more powerful notion is how infrastructures based on principles of virtualization and Grid support what organizations are trying to do with SOA. They are really very complementary, and the smart firms are realizing that virtualization and Grid, as infrastructure capabilities, are the perfect foundation for SOA.

Gt: How has being associated with virtualization, SOA and other related technologies helped Grid computing move beyond its image of being strictly about compute power?

HAYNOS: I think that it has helped tremendously. Having worked in the “Grid” space for over three years, I've seen Grid move from a niche associated with high-performance computing to more of an enterprise thought. Organizations realize that this abstraction — or virtualization — of workload and information across the “enterprise” is a very powerful concept and one that aligns perfectly with SOA.

We always believed that, as time proceeded, Grid would cease to exist as a four-letter word, and that it was just the natural way to do distributed computing. The progression and adoption of SOAs and the associated standards, whose importance cannot be overstated, have moved Grid beyond its early image of being strictly about compute power to being a much broader infrastructure thought in support of the SOA principles of integrating people, processes and information.
