The Scalability Dilemma and the Case for Decoupling

By Justin Y. Shi, Temple University

March 30, 2016

The need for extreme scale computing is driven by the seemingly forever-fledgling Internet. In the abstract, the entire network is already an extreme scale computing engine. The technical difficulty, however, is harnessing these dispersed computing powers for a single purpose. An analogy would be building an engine capable of harnessing the combustive power of fuel to move people or things. Such an engine could drive transformative changes in technology, society and the economy.

The first requirement for such an extreme scale computing engine is the ability to gain incrementally better performance and reliability while concurrently expanding in size. We expect more from this engine than we do from a sports car. The “cost of doing business” should include only oil changes and tire and bearing replacements, not rebuilding the car when a tire bursts or the engine is upgraded. Unlike a sports car, the extreme scale computing engine should run faster and more reliably as it expands to solve a bigger problem. While the engine’s top deliverable performance is necessarily capped by the aggregate of available capabilities, there should be no loss in an application’s reliability as it expands in size.

Reliable distributed computing is hard. A 1993 paper entitled “The Impossibility of Implementing Reliable Communication in the Face of Crashes”[i] drew a “line in the sand”: it proved that, given a sender and a receiver, reliable communication between them is impossible if either one could crash arbitrarily. It follows immediately that any distributed or parallel application that depends on fixed program-processor bindings must face an increased risk of crashes as the application expands, namely the “scalability dilemma.”
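
To make the dilemma concrete, here is a back-of-the-envelope sketch (an illustration added for this discussion, not a model from the paper): if an application is bound to n processors and each fails independently with probability p during a run, the application survives only when every one of them survives, so its crash probability climbs rapidly with n.

```python
# Illustrative only: independent-failure model for an application whose
# programs are bound to fixed processors. If any one bound processor
# crashes, the whole application crashes.

def app_failure_probability(n_processors: int, p_fail: float) -> float:
    """Probability that at least one of n independently failing
    processors (each with per-run failure probability p_fail) crashes."""
    return 1.0 - (1.0 - p_fail) ** n_processors

if __name__ == "__main__":
    p = 0.001  # assumed per-processor failure probability for a single run
    for n in (10, 1_000, 100_000):
        print(f"n = {n:>7}: P(application crash) = {app_failure_probability(n, p):.4f}")
```

With p = 0.001, the crash probability is roughly 1% at 10 processors, about 63% at 1,000, and effectively certain at 100,000: the scalability dilemma in numerical form.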

The corollary of the impossibility proof is that reliable failure detection is also impossible. Thus, fault detection/repair/reschedule schemes are technically flawed for extreme scale computing. In this context, “reliability” means 100% application reliability as long as the system affords at least the minimal survivable resource set. For any computing or communication application, the “minimal survivable resource set” includes at least one viable resource on every critical path at the time of need.
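
One way to read this definition (an interpretation sketched here, not a formula from the article) is that a fully decoupled application survives as long as every critical path retains at least one viable resource; a path with r interchangeable resources fails only if all r of them fail.

```python
# Illustrative only: survivability when programs and data are decoupled
# from devices, so any one of several interchangeable resources can serve
# a critical path. The application fails only if *every* resource on some
# critical path fails at the time of need.

def app_survival_probability(resources_per_path: list[int], p_fail: float) -> float:
    """resources_per_path[i] = viable resources available to critical path i;
    p_fail = independent per-resource failure probability."""
    survival = 1.0
    for r in resources_per_path:
        survival *= 1.0 - p_fail ** r  # path survives unless all r resources fail
    return survival

if __name__ == "__main__":
    p = 0.001
    # Assumed example: three critical paths (say compute, network, storage),
    # each with a handful of interchangeable resources.
    print(f"P(application survives) = {app_survival_probability([3, 2, 4], p):.9f}")
```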

Ironically, the possibility of building such a highly reliable system out of faulty networks was also proved by one of the same authors[ii]. Today’s Internet is a large-scale demonstration of the correctness of that proof. These two complementary results nonetheless seem contradictory to most people. The confusion may be rooted in a widespread faulty assumption in the distributed computing community: the “virtual circuit.” It is widely taught and believed that a virtual circuit is “a reliable, lossless data transmission channel between two communicating programs.” Historically, the term was coined by the networking community to mark a clean “hand-off point” for the computing community. The trouble was that computing professionals took the liberty of expanding the virtual circuit definition to include the reliability of the communicating programs.
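
A minimal sketch of the confusion (the port number and the deliberately short-lived “server” are made up for illustration): a TCP send() reports success because the local kernel accepted the bytes, even though the program on the other end has already gone away without processing them.

```python
# Illustrative only: a "reliable" virtual circuit does not make the
# communicating programs reliable. The server below accepts a connection
# and immediately disappears; the client's send() still reports success
# because the bytes were merely handed to the local kernel.
import socket
import threading
import time

PORT = 50007  # arbitrary local port chosen for this sketch

def short_lived_server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.close()   # simulate a crash before reading anything
    srv.close()

threading.Thread(target=short_lived_server, daemon=True).start()
time.sleep(0.2)    # give the server thread time to start listening

client = socket.create_connection(("127.0.0.1", PORT))
sent = client.send(b"critical request")
print(f"send() returned {sent}: the kernel accepted the bytes,")
print("but no program ever read them; a reliable channel is not a reliable application.")
client.close()
```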

This was an unfortunate mistake. It crossed the “line in the sand.” The problem was quickly identified as the first fallacy, “the network is reliable,” in the “Eight Fallacies of Distributed Computing”[iii]. Yet over the last three decades, industry and research communities have continued to ignore the warning signs despite increasing service downtimes and data losses in today’s large scale distributed systems, including mission critical applications and HPC applications.

The Stateless Parallel Processing (“SPP”) concept[iv] was conceived in the mid-1980s based on a practical requirement of a mission critical project called “Zodiac.” The requirement was very basic: keep a distributed application running regardless of partial component failures. It was inconceivable for national security to rely on any mission critical application that could crash on a single component failure. Technically speaking, mission critical programs and data must be completely decoupled from processing, communication and storage devices. Otherwise, any device failure can halt the entire application, and expanding the processing infrastructure will inevitably result in a higher probability of service interruptions, data losses, and runaway maintenance costs. HPC applications are the first non-lethal applications to demonstrate these potentially disastrous consequences. The growing instabilities in large scale simulations have already played a role in investigations of the scientific computing reproducibility problem[v].

Methods for building completely decoupled applications are fundamentally different from those for “bare metal” applications. The first difference is in the design of the Application Programming Interface (“API”). Technically, Remote Procedure Call (“RPC”), Message Passing Interface (“MPI”), shared memory (“OpenMP”), and Remote Method Invocation (“RMI”) are all “bare metal”-inspired APIs. Applications built with these APIs force their runtime systems to generate fixed program-processor dependencies. They have crossed the “line in the sand,” and the application scalability dilemma becomes unavoidable.
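
For illustration only (this is not code from the article), the fixed binding is easy to see in an MPI-style decomposition, sketched here with mpi4py: the data slice a process owns is a pure function of its rank, so the loss of the processor behind any rank orphans that slice and normally brings down the whole job.

```python
# Illustrative only: a typical MPI-style domain decomposition binds a
# fixed slice of the data to a fixed rank. If the processor hosting a
# rank dies, its slice is orphaned and the job is usually torn down.
from mpi4py import MPI  # requires an MPI installation; run under mpirun

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 1_000_000
chunk = N // size
lo = rank * chunk
hi = (rank + 1) * chunk if rank < size - 1 else N

# The program-processor binding: only *this* rank ever touches [lo, hi).
partial = sum(range(lo, hi))
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print("sum =", total)
```

The point is not that MPI is incorrect, but that its runtime has no sanctioned way to re-home the interval [lo, hi) if the processor behind that rank disappears mid-run.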

The <key, value>-based APIs, such as Hadoop, Spark, and Scality, aim to relax the program/data-device dependency by allowing the runtime system to perform failure detection and repair “magic.” These efforts have already shown significant scalability gains over “bare metal” approaches. Unfortunately, due to the influence of the “virtual circuit” concept, their runtime implementations have also crossed the “line in the sand.” The natural next step is to completely decouple devices from programs and data.
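
By contrast, a decoupled formulation can be pictured as anonymous workers pulling <key, value> tasks from a shared space. The sketch below is illustrative only (it is not the Synergy, Hadoop, or Spark API); a crashed worker is modeled by re-issuing its task so that any survivor can finish it.

```python
# Illustrative only: anonymous workers pull <key, value> tasks from a
# shared space. No task is bound to a particular worker, so a lost worker
# merely delays its task, which is re-issued to any survivor.
import queue
import random
import threading

tasks: "queue.Queue[tuple[int, int]]" = queue.Queue()
results: dict[int, int] = {}
results_lock = threading.Lock()

for key in range(20):
    tasks.put((key, key))            # <key, value> work units

def worker() -> None:
    while True:
        try:
            key, value = tasks.get(timeout=0.5)
        except queue.Empty:
            return                   # no work left; retire quietly
        if random.random() < 0.2:    # simulated crash: this attempt is lost...
            tasks.put((key, value))  # ...model re-issue by requeueing the task
            continue
        with results_lock:
            results[key] = value * value

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{len(results)} of 20 tasks completed despite simulated worker failures")
```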

As the “Internet of Things” takes hold, the “smart big sensing” challenge is on the horizon. In this context, an extreme scale computing engine is simply a necessity for survival. The existing distributed and parallel computing technologies are woefully inadequate.

Fundamentally, all electronics will fail in unexpected ways. “Bare metal” computing was important decades ago but is detrimental to large scale computing. It is simply flawed for extreme scale computing.

Albert Einstein is often credited with defining “insanity” as doing “the same thing over and over again and expecting a different result.” Without a paradigm shift, we can continue to call anything “extreme scale” while secretly keeping the true extreme scale engine in our dreams.

References

[i] Alan Fekete, Nancy A. Lynch, Yishay Mansour, John Spinelli, “The Impossibility of Implementing Reliable Communication in the Face of Crashes,” Journal of the ACM, 1993.

[ii] John Spinelli, “Reliable Data Communication in Faulty Computer Networks.” Ph.D. dissertation. Dept. Elect. Eng. Comput. Sci., Massachusetts Institute of Technology, Cambridge, Mass., and MIT Laboratory for Information and Decision Systems report LIDS-TH-1882, June 1984.

[iii] Peter Deutsch, “Eight Fallacies of Distributed Computing,” http://www.ibiblio.org/xml/slides/acgnj/syndication/cache/Fallacies.html

[iv] Justin Shi, “Stateless Parallel Processing Prototype: Synergy”. https://github.com/jys673/Synergy30

[v] XSEDE 2014 Reproducibility Workshop Report, “Standing Together for Reproducibility in Large-Scale Computing”. https://www.xsede.org/documents/659353/d90df1cb-62b5-47c7-9936-2de11113a40f
