Who’s Afraid of Grid Computing?

By Kelly Vizzini, Contributing Author

August 8, 2005

At the GRIDtoday VIP Summit in Chicago last month, I gave a presentation that was a bit offbeat. Much is being written about how Grid is a paradigm-shifting, barrier-breaking technology that is going to transform not just the data center, but the way enterprises develop and deploy applications. And while the marketer in me appreciates such boundless enthusiasm (and the scads of news coverage the topic generates), I thought it would be a productive session to look at the topic from another angle: Why aren't more companies adopting Grid? Or why aren't they adopting more of it, faster?

What follows here is a narrative of the five reasons presented during that session.

1. Lack of Understanding

Fans of “Saturday Night Live” may recall a skit with Dan Aykroyd and Gilda Radner as a couple debating the benefits of a new product called “Shimmer,” which — according to the sales rep, Chevy Chase — was both a dessert topping and a floor wax.

Even those of you who never saw the original may be familiar with this vignette, especially if you've been in the technology business long enough. What is it — a dessert topping or a floor wax? It's a pop culture reference often used when products don't fit neatly into one category. Given how much airtime is spent defining Grid, it's an analogy that's quite apropos: “Well, it's a cluster, it's a Grid, it's virtual infrastructure …” and the list goes on.

When we were at the last GRIDtoday Summit in London this May, we spent time the first morning debating the differences between clusters and Grids (and the implied value proposition of each). Admittedly, it was a slightly painful discussion. John Hurley of Boeing gave a brilliant talk about the reality that enterprises don't care what we call it, as long as we can clearly articulate what this technology does for them.

As distributed computing has evolved, many catch-phrases have been used, especially as marketing machines continue pumping millions of dollars into propagating each unique label. For some companies, it's software. For others, it's hardware or services. And sometimes, it's a vision or a brand that encompasses all three. But while vendors develop new buzzwords in the hopes of creating a market distinction and — we hope — a market advantage, in the end, what we've really created is confusion.

Without question, if our buyers — the users — don't have a common language to discuss problems and solutions, it slows things down. This confusion perpetuates a lack of understanding about this technology. At DataSynapse, 18 months ago, the questions we were fielding during evaluations centered more around “What is Grid?” As the market matured, the questions have shifted to: What does it do, exactly? What will the impact be? Why do I want it? And probably most frequently these days, “How do I get started?!”

To address this new need for customer understanding and action, it's imperative to steer conversations toward the problems Grid can solve, including proven examples of what this technology can do for their businesses.

2. Resistance to Change

Another hurdle that can't be discounted is the natural resistance to change that exists within the enterprise. Grid evangelists sometimes encounter the attitude that “good enough” is good enough. Interestingly, though, the old adage about not fixing things that aren't broken doesn't apply in this case because, while “broken” might be the wrong word to describe enterprise technology today, there is pain within the enterprise when application performance, scale and reliability issues arise. But still, it's difficult to battle inertia and to get folks to embrace new ways of solving old problems. This is because — shock of shocks — new technology requires new skill sets to deploy and support it.

For folks who've spent years building intricate “plumbing,” the care and feeding that legacy distributed systems often require can translate into job security, even if those homegrown solutions are not getting the job done as efficiently or effectively as possible. And, lastly, change often equals risk. Proponents of Grid must be able to articulate the risk/reward scenario and the expected impact of a successful Grid implementation.

The fact of the matter is, Grid represents both an evolution and a revolution. We all acknowledge that most enterprises have been doing some form of distributed computing for years. So perhaps, implementing Grid is merely an evolution from homegrown to packaged technology, so enterprises can redeploy IT resources — away from “minding the infrastructure” and onto other value-add projects.

And yet, the impact of this technology — up and down the entire stack — means that it is also revolutionary. Why? Because it has the power to potentially change the way enterprises buy and deploy software and hardware, and, ultimately, the way they manage a service-oriented enterprise.

3. Cultural Impact

Closely related to “resistance to change,” the fear of the unknown prevents many a journey. Because it's not well understood, cultural impact is one of the more widely reported inhibitors to Grid adoption.

As Grid software breaks down the silos that exist between applications and business units, the simple fact is that people have to learn to share. Grid delivers the power to distribute application service requests across a pool of shared resources that are dynamically expanding and contracting according to business demand — regardless of who owns those systems or where they're located.

The technology exists, but enterprises are simply not set up that way. If one business unit pays for those resources, there's a proprietary sense of “Why should I share? Let them go pay for their own.” Often referred to as “server-hugging,” this is one of the most common sticking points cited early in Grid software evaluations. Even if the resistance to sharing is overcome, there are still other questions to answer.

Users often ask, “How do I know that, if I share, I'll still get what I need done, when I need it?” What's lacking is the sense of trust in the Grid's ability to guarantee execution of service requests based on policy, priority and user-defined business rules.
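Stripped to its essentials, that trust concern is a scheduling problem. The sketch below is purely illustrative — a toy broker, not DataSynapse's actual product logic — showing how priority-based business rules can guarantee that critical work is dispatched first even on a shared pool:

```python
import heapq
import itertools

class GridScheduler:
    """Toy broker: dispatches service requests from shared resources by priority.

    Higher-priority business units are served first, so sharing a pool
    does not mean losing control over when your work runs.
    """

    def __init__(self):
        self._queue = []               # min-heap of (-priority, seq, request)
        self._seq = itertools.count()  # tie-breaker preserves FIFO order

    def submit(self, request, priority=0):
        # Negate priority so the largest value pops first from the min-heap.
        heapq.heappush(self._queue, (-priority, next(self._seq), request))

    def dispatch(self):
        """Return the next request to run, or None if the queue is empty."""
        if not self._queue:
            return None
        _, _, request = heapq.heappop(self._queue)
        return request

sched = GridScheduler()
sched.submit("overnight-batch", priority=1)
sched.submit("risk-calc",       priority=9)  # business-critical
sched.submit("report-gen",      priority=5)

print(sched.dispatch())  # risk-calc runs first despite arriving later
```

The point users need to see demonstrated is exactly this: the risk calculation jumps the queue ahead of work that was submitted earlier, because policy — not ownership of the box — decides what runs next.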

4. Technology Impact

Though many companies have already started adopting Grid, there are still many questions around where the technology fits within the IT landscape. How will it impact current and planned infrastructure? Most significantly, what applications fit on the Grid? Which make sense and which don't?

For example, during our implementations, applications are assessed based on multiple criteria (e.g., unit of work, I/O requirements, whether the workload is synchronous/asynchronous, stateless/stateful, etc.). Applications are then plotted in a quadrant that maps ease of integration against business value.
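A minimal sketch of that quadrant mapping might look like the following. The scores, threshold and labels are hypothetical — in a real assessment, the criteria above (unit of work, I/O, synchronous vs. asynchronous, stateless vs. stateful) would be weighted into the two axis scores:

```python
def assess_application(ease_of_integration, business_value, threshold=5):
    """Place an application in one of four quadrants.

    Both inputs are 0-10 scores: how easily the app can be Grid-enabled,
    and how much business value Grid-enabling it would deliver.
    """
    easy = ease_of_integration >= threshold
    valuable = business_value >= threshold
    if easy and valuable:
        return "green: low-hanging fruit -- Grid-enable first"
    if valuable:
        return "high value, harder integration -- plan carefully"
    if easy:
        return "easy but low value -- defer"
    return "hard and low value -- avoid"

# A compute-intensive risk engine with clean parallelism scores well on both axes.
print(assess_application(8, 9))  # green: low-hanging fruit -- Grid-enable first
```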

Figure 1: Application Roadmap

In Figure 1, applications that fall into the green quadrant (the low-hanging fruit) are often characterized as computationally intense or HPC. They represent the most significant pain points, and because they often have work that is “easily parallelizable,” Grid-enabling them is somewhat straightforward. Unfortunately, the perception exists today that Grid is only good for HPC applications. While it is an obvious and easy place to start for most enterprises, it doesn't represent the sum total of opportunity for Grid within an enterprise.

There are two other hot-buttons that fall under the heading of Technology Impact: standards and security.

Standards are evolving, but slowly. Because of the overlap with so many other technologies like Web services, SOA and traditional distributed computing, a number of standards bodies are developing standards related to Grid computing, including the W3C, OASIS, IETF, DMTF, WS-I, EGA, GGF and others. While it is not practical for vendors to support all of the standards in the space, a combination of industry adoption and standards maturity will eventually clear away some of the confusion.

Security also gets a lot of airtime, especially in situations for which the enterprise is deploying Grid across its desktops. In a shared environment like this, IT must be able to reassure users that the only thing being scavenged is processing cycles — not proprietary, business critical information.

5. Software Licensing

Although this topic could be logically grouped under “Technology Impact,” it's important enough to deserve its own place on the top five list. Arguably, software licensing is the most-talked-about reason (right behind the cultural inhibitors) why companies are slow to adopt Grid.

In a recent and comprehensive report on software licensing, The 451 Group asserts: “As [enterprises] evolve into using Grids as more mainstream technology, the restrictions of current software licensing will become an even greater obstacle.”

It's a pretty succinct summation of the limitations that current licensing practices (per CPU, per seat, per user) place on Grid adoption. Without question, the new computing models will require new licensing models. Grid is just one of many catalysts spurring this dialogue.

While much has been reported about how ISVs are uninterested or unwilling to address Grid, there is progress. A growing list of ISVs has embraced Grid because it's a way to boost customer satisfaction (e.g., Algorithmics, Calypso, Milliman, Reuters, etc.). In some cases, they're announcing OEM agreements that embed Grid capabilities in their software to offer out-of-the-box integration to their installed base — and all the inherent benefits in improved application performance that come with it.

Summary

So, who's not afraid of Grid computing? Actually, there is a prestigious and growing list of global firms — many of which are household brand names — that are willing to speak publicly about the significant and measurable value they are deriving from Grid. Moreover, these are companies that, in many cases, are expanding the size and scope of their existing implementations to move toward enterprise Grids — virtualizing multiple applications across multiple lines of business and geographies. The case studies are out there — at events like the GRIDtoday VIP Summits and the upcoming GridWorld — and anyone who cares to look can see who is utilizing Grid and how.

About Kelly Vizzini

As chief marketing officer at DataSynapse, Kelly Vizzini works to leverage the company's existing successes and domain expertise to build a brand identity that positions DataSynapse as the de facto standard in the U.S. and European markets for distributed computing solutions. Prior to her role at DataSynapse, Vizzini held marketing positions at several software companies including Prescient, Optum, Metasys and InfoSystems. She holds a bachelor's degree in journalism and communications from the University of South Carolina.
