Deja Vu All Over Again?

By John Hurley

April 12, 2010

It appears the adage that “the more things change, the more they stay the same” is not far off the mark. Distributed computing will henceforth be treated as the foundation from which the “newer” technologies of grids, fabrics, and clouds have naturally evolved, and that premise will serve as the fundamental basis for present and future discussions in this space.

A distributed system can be defined as “a collection of independent computers that appears as a single coherent system” [1]. Advances in computing and information technology have produced software, hardware, and networks that let users do significantly more than in the past, and those advances naturally prompt users to push for still greater capability. The natural outcome is a broadening of the scope and types of applications that can be addressed and a loosening of the limits placed on them. Multi-core processors, for example, mean that the “distributed” nature of distributed systems is no longer confined to collections of independent computers; the distribution can now be observed within a single machine. Multi-core architectures, through software multithreading, help deliver better performance, particularly greater speed [2]. Yet while the right software is a critical factor in that speedup, it is the proper design of the application for a threaded environment that is the key to faster processing. We will return to the merits of focusing on applications and their designs later.
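
To make that last point concrete, here is a minimal, hypothetical sketch (the prime-counting workload, the chunking, and the worker count are invented for illustration and are not drawn from the article): the same CPU-bound job is run once serially and once partitioned across worker processes, and the benefit depends entirely on how cleanly the work decomposes into independent pieces.

```python
# Illustrative only: a CPU-bound workload split across cores.
# The speedup depends on how cleanly the application decomposes into
# independent pieces, not merely on having multiple cores available.
import time
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately naive)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limit, workers = 200_000, 4

    start = time.perf_counter()
    serial = count_primes((0, limit))
    t_serial = time.perf_counter() - start

    # The same work, partitioned into independent, equally wide chunks.
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        parallel = sum(pool.map(count_primes, chunks))
    t_parallel = time.perf_counter() - start

    print(f"serial:   {serial} primes in {t_serial:.2f}s")
    print(f"parallel: {parallel} primes in {t_parallel:.2f}s")
```

Even in this toy, equally wide chunks are not equally expensive (larger numbers cost more to test), so the measured speedup falls short of the worker count unless the decomposition is rebalanced; it is a small illustration of why the design of the application, rather than the hardware alone, governs the gain.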

It is important to note some of the characteristics of distributed systems, including the following (a brief illustrative sketch appears after the list):

• Differences between the various computers, and the ways in which they communicate, are hidden from users;

• The internal organization of the distributed system is hidden from users;

• Users and applications can interact with a distributed system in a consistent and uniform way, regardless of where and when interactions take place;

• Solutions should be easy to expand or scale [1].
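
As a minimal, hypothetical sketch of the first three properties (the service name, node addresses, and routing rule below are invented for illustration), the fragment lets a caller name a logical service and an item; which node answers, and how the nodes are organized, stays hidden behind one uniform call.

```python
# Illustrative only: a client addresses a logical service name; which
# node answers, and how the nodes are organized, is hidden from it.
import hashlib

# Hypothetical registry: one logical name, several interchangeable nodes.
REGISTRY = {
    "climate-archive": ["10.0.0.11:9000", "10.0.0.12:9000", "10.0.0.13:9000"],
}

def resolve(service, key):
    """Pick a node for this request; callers never see the choice being made."""
    nodes = REGISTRY[service]
    index = int(hashlib.sha256(key.encode()).hexdigest(), 16) % len(nodes)
    return nodes[index]

def fetch(service, key):
    """Uniform interface: callers name the service and the item, nothing else."""
    node = resolve(service, key)
    # A real system would issue a network request here; we only report the routing.
    return f"{key!r} served by {service} (node {node})"

print(fetch("climate-archive", "run-042/temperature.nc"))
print(fetch("climate-archive", "run-043/pressure.nc"))
```

The fourth property, ease of scaling, amounts in this sketch to lengthening the node list in the registry; the calling code does not change.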

The characteristics noted above are, not too surprisingly, the focus of many of the “new” technologies that have captured our attention and discussion. For example:

• For grids, the focus is on sharing resources from multiple administrative domains toward a common goal;

• For fabrics, the focus is on integrating or connecting nodes of resources to facilitate consolidated computing; and, last but certainly not least,

• For clouds, the focus is on shared resources that are available on an “as-needed” or “on-demand” basis.

The author uses the parallels in characteristics noted above to reinforce the view that, given the inherent nature and promise of distributed computing, overlap between it and the “newer” concepts is unavoidable. In spite of those similarities and parallels, however, there are some very important and interesting distinctions in the “feel” of cloud computing.

The journeys travelled by grids and clouds have taken very different routes. Industry lagged behind in the development and implementation of grids, constantly searching for the “right” business niche and the potential ROI to justify any significant investment in the technology. SLAs, accountability, and responsibility were always major points of contention in the cooperative sharing promoted by grid technologies. For example, defining which entity, person(s), or organization(s) would be responsible for data (i.e., who would act as its caretaker within the shared environment) was an ongoing issue that had to be negotiated. No one, for obvious reasons, was eager to be held responsible for data that was corrupted, inaccessible, or unsecured, given the IP and legal issues that could be incredibly expensive for companies and collaborators to work through to everyone’s mutual agreement. Accommodating grids within an overall mainstream, comprehensive strategy has also been a continuous challenge for IT administrators and their user communities.

Of special note is the fact that industry not only got in on the ground floor with the development and implementation of clouds but, more than any other sector, has actually led the movement. The prospective business niches were inherent to clouds from the beginning and did not have to be discovered along the way, as was the case for grids. A major barrier for grids that is notably smaller for clouds is the set of challenges surrounding the sharing grids were built to facilitate, i.e., how the cost structure is divided among the participants. In addition, clouds offer greater variability in structure: they may exist in private, public, or hybrid configurations, an option far more limited in grid environments.

We have focused on the distinctions between clouds and grids, but we would also like to consider some of the similarities that may help us better understand and use these technologies to address our applications and challenges more efficiently. How data is secured to protect privacy, integrity, and access remains a keen interest and concern. Nowhere is that concern more dominant than within the intelligence community, where real-time, secure, and accurate exchanges can be matters of national and global security. As alluded to earlier, our focus will be on scientific applications, wherein large volumes of data are a natural part of the problems to be addressed. In the author’s view, scientific applications matter because they truly test the bounds and limitations of our present resources and demand innovative, creative solutions; we would hope those solutions can be extended to benefit other applications as well. We also stated previously that the design of the application should be viewed as a significant factor as we learn more about how to improve performance and solutions. Applications matter, finally, because they drive not only the resources needed to produce results and information but also the resolution of the outcomes and, ultimately, the progress we hope to achieve.
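
Because the integrity of large scientific datasets recurs throughout the discussion above, a minimal, hypothetical sketch of one routine piece of that concern follows (the script name and dataset path are invented; real deployments would layer encryption, access control, and provenance on top): confirming that a multi-gigabyte file survived a transfer between collaborating sites unchanged.

```python
# Illustrative only: stream a large data file in fixed-size chunks and
# produce a digest that both sites can compare after a transfer.
import hashlib
import sys

def file_digest(path, chunk_bytes=8 * 1024 * 1024):
    """SHA-256 of a file read in chunks, so memory stays bounded even for
    the multi-gigabyte outputs typical of large simulations."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_bytes):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Hypothetical usage: each site runs this against its copy and the two
    # short hex strings are compared instead of re-shipping the data.
    if len(sys.argv) != 2:
        sys.exit("usage: python verify_transfer.py <path-to-dataset>")
    print("sha256:", file_digest(sys.argv[1]))
```

A digest is, of course, only one small piece of privacy and access control, but agreeing on even this step is the kind of shared responsibility that the grid-era SLA negotiations described above struggled to pin down.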

It is worth reiterating our focus on the data generated by applications and our intent to work through the many different classes of scientific applications that the user community brings forward. Together, we hope to identify and share some of the challenges, progress, approaches, and lessons learned in addressing a wide range of applications. The goal is an interactive dialogue and exchange that fosters an improved understanding of HPC applications “In the Cloud”.

[1] Tanenbaum, A. S. and van Steen, M., Distributed Systems: Principles and Paradigms, p. 2, Prentice Hall, 2002.

[2] Akhter, S. and Roberts, J., Multi-Core Programming: Increasing Performance through Software Multithreading, Intel Press, 2006.

---

HPC in the Cloud contributor John Hurley is the Principal Investigator and Director of the National Nuclear Security Administration and DOE-sponsored Center for Disaster Recovery.
