Grid, HPC and SOA: The Real Thing?

By Labro Dimitriou, Contributing Author

May 23, 2005

How do we know when a new technology is the real thing or just a fad? Furthermore, how do we value the significance of a new technology, and when is adopting it a tactical decision rather than a strategic one? In this article, I will discuss why Grid and SOA are here to stay. I will also describe the technology “product stack” in order to distinguish the strategic from the tactical, and I will propose best-practice techniques for securing ROI and building resilience to change and early-adoption risks.

Some would say that Grid and SOA are not revolutionary concepts, but rather evolutionary steps of enterprise distributed computing. Make no mistake, though: together, the technologies have the potential and the power to bring about a computing revolution. Grid and SOA may seem unrelated, but they are complementary notions with fundamentally the same technology underpinning and common business goals. They are service-based entities supporting the adaptive enterprise.

So, let's talk about the adaptive, or agile, enterprise and its characteristics. The only constant in today's business models is change. The way of doing business changes constantly, whether because the company's focus shifts or because of new competitive pressures: today we are product-focused, tomorrow we are client-centric. Re-engineering the enterprise is no longer a final state, but an ongoing effort. Consider Six Sigma and Business Process Management (BPM) initiatives. Integration is not an afterthought anymore; most systems are built with integration as a hard requirement. Changes in underlying technology are apparent across all infrastructures and applications. The fact that new hardware delivers more power for less money shows that Moore's law is still valid. And last, but most challenging, are the varying requirements for compute power. Clearly, over-provisioning can only lead to underutilization and overspending, both undesirable results.

Information systems have to support the adaptive enterprise. As David Taylor wrote in his book Business Engineering with Object Technology: “Information systems, like the business models they support, must be adaptive in nature.” Simply put, information systems have two layers, software and hardware, supporting and facilitating business requirements.

SOA decouples business requirements and presentation (the user interface) from the core application, shielding the end user from incremental changes and vice versa: it localizes the effect of code changes when requirements adapt to new business conditions.

Grid software decouples computing needs from hardware capacity. It inserts the necessary abstraction layer that not only protects the application from hardware change, but also provides horizontal scalability, predictability with guaranteed SLAs, fault tolerance by design and maximum CPU utilization.

SOA gave rise to the notion of the enterprise service bus, which can transform a portfolio of monolithic applications into a pool of highly parameterized, service-based components. A new business application can be designed by orchestrating a set of Web services already in production. Time to market for a new application can be reduced by orders of magnitude. Grid services virtualize compute silos suffering from under-performance or under-utilization and turn them into well-balanced, fully utilized enterprise compute backbones.

SOA provides an optimal path for a minimum-cost re-engineering or integration effort for a legacy system. In many cases, legacy systems gain longevity by replacing a hard-wired interface with a Web services layer. The Grid toolkit can turn a legacy application that has hit the performance boundaries of a large SMP box into an HPC application running on a farm of high-powered, low-cost commodity hardware.

Consider a small to medium enterprise with three or four vertical lines of business (LOBs), each requiring a few turnkey applications. The traditional approach would be to look at the requirements of each application in isolation, design the code and deploy it on hardware managed by the LOB. What is wrong with that approach? Well, lines of business almost certainly share a good number of requirements, which means the enterprise spends money doing many of the same things multiple times. And what about addressing the computing demands of running the dozen or so applications? Each LOB has to do its own capacity management.

Keeping a business unit happy is a tightrope walk between under-provisioning and overspending. SOA is an architectural blueprint that delivers on its promise of application reuse and interoperability. It provides a top-to-bottom approach to developing and maintaining applications. In this case, small domains of business requirements turn into code and are made available to the rest of the enterprise as a service.

Grid, on the other hand, is the ultimate cost-saving strategic tool. It can dynamically allocate the right amount of compute fabric to the LOB that needs it the most. In Grid's simplest form, the risk and analytics group can get near-real-time responses to complex “what if” market scenarios during the day, and the back office can meet the critical requirements of a global economy by using most of the compute fabric during the night window, which keeps getting smaller.

Next, let's review the product stack. First, I need to make a distinction between High Performance Computing (HPC) and Grid. HPC is all about making applications compute fast, and one application at a time, I might add. Grid software, at large, orchestrates application execution and manages the available hardware resources, or the compute fabric. There is a further distinction based on the geographic co-location of the compute resources (i.e., desktop computers, workgroups, clusters and Grids). Grid virtualizes one or more clusters, whether they are located on the same floor or halfway around the world. In all cases, the hardware can be heterogeneous, with different computing properties.

In this article, I refer to the available compute fabric as the Grid at large. HPC applications started on supercomputers, vector computers and SMP boxes. Today, Grid offers a very compelling alternative for executing HPC applications. By taking a serially executing application and chunking it into smaller components that can run simultaneously on multiple nodes of the compute fabric, you can potentially improve the performance of an application by a factor of N, where N is the number of CPUs available on the compute fabric. Not bad at all, but admittedly there is a catch. Finding the parallelization opportunity, or chunking, is not always a trivial task and may require major re-engineering. That sounds invasive and costly, and the last thing one wants is to make logic changes to an existing application, adopt a new programming paradigm, hire expensive niche expertise and embark on one-off development cycles that take time away from core business competence.
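
It is worth being precise about that "factor of N." Amdahl's law gives the bound: if a fraction p of an application's runtime can be parallelized, the best achievable speedup on N CPUs is S(N) = 1 / ((1 - p) + p / N). With p = 0.95 and N = 1,000, the speedup tops out near 20, not 1,000, which is exactly why finding a good chunking matters so much.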

The good news is that several HPC design patterns are emerging. In short, there are three high-level parallelization patterns: domain decomposition, functional decomposition and algorithmic parallelization. Domain decomposition, also known as “same instructions, different data” or “loop-level parallelization,” provides a simple Grid-enablement process. It requires that the application be adapted to run on smaller chunks of data (e.g., if you have a loop that iterates 1 million times doing the same computation on different data, the adapter can chunk the loop into, say, 1,000 ranges and do the same computation using 1,000 CPUs in parallel). OpenMP's “#pragma omp parallel” is a pre-compiler adapter supporting domain decomposition.
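
To make the loop-level case concrete, here is a minimal sketch of domain decomposition using the OpenMP parallel-for pragma; the per-item computation is a placeholder standing in for a real pricing or simulation kernel:

```cpp
#include <cstdio>
#include <vector>

// Build with: g++ -fopenmp example.cpp
int main() {
    const int N = 1000000;
    std::vector<double> price(N), value(N);
    for (int i = 0; i < N; ++i) price[i] = 100.0 + i * 0.0001;

    // Domain decomposition ("same instructions, different data"): OpenMP
    // splits the index range into chunks and runs each chunk on its own CPU.
    #pragma omp parallel for
    for (int i = 0; i < N; ++i)
        value[i] = price[i] * 1.05;  // placeholder for the real computation

    std::printf("first=%.4f last=%.4f\n", value[0], value[N - 1]);
    return 0;
}
```

On a Grid, the same pattern applies across nodes rather than cores: the adapter hands each node a sub-range of the data instead of handing each thread a sub-range of the loop.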

Functional decomposition comes in many flavors. The most obvious flavor is probably running in your back-office batch cycle: a set of independent executables readily available to run from the command line. In its more complex variety, it might require minimal instrumentation or adaptation of the serial code, as the sketch below suggests.
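
As a minimal sketch of the batch-cycle flavor, assume a handful of hypothetical executables that share no state; local threads stand in here for the remote dispatch a real Grid scheduler would perform:

```cpp
#include <cstdlib>
#include <string>
#include <thread>
#include <vector>

// Functional decomposition in its simplest form: independent command-line
// jobs that can run concurrently because they share no state.
int main() {
    std::vector<std::string> jobs = {
        "./settle_trades --region emea",  // hypothetical batch executables
        "./settle_trades --region apac",
        "./mark_positions"
    };

    std::vector<std::thread> workers;
    for (const auto& cmd : jobs)
        workers.emplace_back([cmd] { std::system(cmd.c_str()); });

    for (auto& w : workers) w.join();  // wait for the whole batch cycle
    return 0;
}
```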

Algorithmic parallelization is left for very specific domain problems and usually combines functional and domain decomposition techniques. Examples include HPC solvers for partial differential equations, recombining trees for stochastic models and the global unconstrained optimization required for a variety of business problems.

So, here is the first and top layer of the product stack: the adaptation layer. Applications need a non-invasive way to run on a Grid. This layer provides the means to map serial code to parallel-executing components. A number of toolkits with available APIs are coming to market with varying degrees of abstraction and integration effort. Clearly, different types of algorithms and applications might need different approaches, so a tactical solution may be required. Whatever the approach, you want to avoid logic changes to existing code and use a high-level paradigm that encapsulates the rigors of parallelization. In addition, you should look for a toolkit that comes with a repeatable best-practices process.
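
As an illustration of what such a toolkit API might look like, the sketch below assumes a hypothetical grid_map() call, with std::async standing in for the toolkit's remote dispatch; the point is that the serial kernel itself stays untouched:

```cpp
#include <algorithm>
#include <future>
#include <vector>

// Hypothetical adaptation-layer call: grid_map() fans an unchanged serial
// kernel out over chunks of the input. A real toolkit would ship each chunk
// to a Grid node; std::async is a local stand-in.
template <typename Fn>
std::vector<double> grid_map(Fn kernel, const std::vector<double>& input,
                             std::size_t chunks) {
    std::size_t step = (input.size() + chunks - 1) / chunks;
    std::vector<std::future<std::vector<double>>> parts;
    for (std::size_t lo = 0; lo < input.size(); lo += step) {
        std::size_t hi = std::min(lo + step, input.size());
        parts.push_back(std::async(std::launch::async, [=, &input] {
            std::vector<double> out;
            for (std::size_t i = lo; i < hi; ++i)
                out.push_back(kernel(input[i]));
            return out;
        }));
    }
    std::vector<double> result;
    for (auto& p : parts) {
        auto part = p.get();
        result.insert(result.end(), part.begin(), part.end());
    }
    return result;
}

int main() {
    std::vector<double> prices(100000, 100.0);
    auto value = [](double p) { return p * 1.05; };  // untouched serial kernel
    auto values = grid_map(value, prices, 8);
    return values.size() == prices.size() ? 0 : 1;
}
```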

To introduce the next two layers, consider the requirements for sharing data and communicating results among the decomposed chunks of work. Shared data can be either static or intermediate computed results. In the case of static data, a simple NFS-type solution or database access will suffice. But if the parallel workers need to exchange data, distributed shared-memory services might be required. So, the next layer going down the stack provides data transparency and data virtualization across the Grid. Clearly, it is a strategic piece of the puzzle, and high performance and scalability are critical for the few applications that need these qualities of service.

Communication among workers brings us to the classic middleware layer. One word of advice: make sure that your application is not exposed to any direct calls into the middleware, unless, of course, you have time to develop and debug low-level messaging code. Better yet, make sure you have nothing to do with middleware calls and that the application stack provides you with a much higher API abstraction.
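
A hedged sketch of that advice: application code depends only on a narrow, hypothetical interface, and whatever middleware is chosen (JMS, MPI, a vendor bus) hides behind a single implementation class that can be swapped without touching the application:

```cpp
#include <string>

// The application sees only this narrow abstraction, never the middleware.
class ResultChannel {
public:
    virtual ~ResultChannel() = default;
    virtual void publish(const std::string& topic,
                         const std::string& payload) = 0;
};

// One stand-in implementation; low-level messaging calls would live here,
// and only here. A JMS, MPI or vendor-specific version could be substituted.
class LoggingChannel : public ResultChannel {
public:
    void publish(const std::string&, const std::string&) override {
        // middleware-specific send would go here
    }
};

// Application code is written against the abstraction alone.
void report_result(ResultChannel& channel, double pv) {
    channel.publish("risk.results", std::to_string(pv));
}

int main() {
    LoggingChannel channel;
    report_result(channel, 42.0);
    return 0;
}
```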

So, you've developed your SOA HPC applications and all the LOBs are lining up to use the compute fabric. How do you make sure that applications compute in a predictable fashion and within predetermined timelines? How do you assure horizontal scalability, reliability and high availability? This brings us to the most important part of the stack: the Grid software. The Grid software provides all the qualities of service that make the product stack industrial-strength and mission-critical-ready: workload and resource management; SLA-based ownership of resources; fail-over; cost accounting; operational monitoring for 24×7 enterprises; horizontal scalability; and maximum use of compute capacity. The core of this layer implements an open, policy-driven distributed scheduler.
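
The core loop of such a scheduler can be sketched in a few lines. This toy version is illustrative only, with a single SLA priority as the policy input; a production Grid scheduler layers fail-over, cost accounting and resource matching on top of this dispatch loop:

```cpp
#include <cstdio>
#include <queue>
#include <string>
#include <vector>

// A toy policy-driven scheduler: jobs carry an SLA priority and the
// dispatcher always hands the most urgent job to the next free node.
struct Job {
    std::string name;
    int sla_priority;  // higher = more urgent (hypothetical policy knob)
    bool operator<(const Job& other) const {
        return sla_priority < other.sla_priority;  // max-heap by priority
    }
};

int main() {
    std::priority_queue<Job> queue;
    queue.push({"overnight-batch", 1});
    queue.push({"intraday-risk", 5});
    queue.push({"what-if-scenario", 3});

    std::vector<std::string> nodes = {"node-a", "node-b", "node-c"};
    std::size_t next = 0;
    while (!queue.empty()) {
        Job job = queue.top();
        queue.pop();
        std::printf("dispatch %s -> %s\n", job.name.c_str(),
                    nodes[next++ % nodes.size()].c_str());
    }
    return 0;
}
```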

A word of caution: resist the temptation to roll your own solution. Just answer this: if you were to implement a J2EE application, would you write your own application server? A last word of advice: as rapidly as standards are evolving and products are maturing, it is important to pick your vendors wisely. Get a vendor that will be around tomorrow and that has the technical expertise your enterprise will need to extend the product and support your 24×7 operations.

Technologies cannot exist without real business benefits; we tried that back in the dot-bomb days, right? Clearly, SOA and the Grid software stack are mature, deliver real, tangible business benefits, and fully support the adaptive enterprise and the pragmatic reality of change. The beauty of a Grid and SOA implementation is that it does not have to be a big-bang approach to bring benefits. Start with your batch cycle, the time-consuming custom-built market risk application or the Excel spreadsheet running at the trader's desk that takes 12 hours to complete. Then, instrument your first HPC application and take advantage of idle CPU cycles, or transition an application from an expensive SMP machine to commodity hardware. You will immediately see ROI and business benefits. Be prepared for the unpredictable volume spikes that business growth opportunities bring with them.

Until next time: get the Grids crunching.

About Labro Dimitriou

Labro Dimitriou is a subject matter expert in HPC and Grid. He has been in the fields of distributed computing, applied mathematics and operations research for over 23 years, and has developed commercial software for trading, engineering and geosciences. Dimitriou has spent the last four years designing enterprise HPC and Grid solutions in finance and life science. He can be reached via e-mail at [email protected].
