HPC, the Cloud, and Core Competency

By Scott Clark

April 11, 2010

What does HPC have to do with cloud computing? Well, given that HPC environments are constantly growing, consume large quantities of fairly generic compute resources, and have both peaks and valleys in workload profiles, it would seem that HPC would be the perfect candidate for cloud computing, if only we could get past the barriers to adoption.
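To make the peaks-and-valleys point concrete, here is a minimal sketch (in Python) of the decision an elastic HPC environment gets to make. Everything in it is a hypothetical illustration; the function name, node size, and numbers are my assumptions, not any real scheduler's API:

```python
import math

# Hypothetical sketch of HPC "cloud bursting" logic. During a workload
# valley the answer is zero nodes; during a peak we rent only the
# shortfall. All names and numbers here are illustrative assumptions.

def cloud_nodes_needed(queued_core_hours: float,
                       idle_local_cores: int,
                       deadline_hours: float,
                       cores_per_cloud_node: int = 32) -> int:
    """How many cloud nodes to rent so queued work finishes on time."""
    # Core-hours the in-house cluster can deliver before the deadline.
    local_capacity = idle_local_cores * deadline_hours
    shortfall = queued_core_hours - local_capacity
    if shortfall <= 0:
        return 0  # a valley: local resources suffice, rent nothing
    # A peak: rent just enough cloud capacity to cover the shortfall.
    return math.ceil(shortfall / (cores_per_cloud_node * deadline_hours))

# A peak: 10,000 queued core-hours, 128 idle local cores, 24-hour deadline.
print(cloud_nodes_needed(10_000, 128, 24))  # -> 10 cloud nodes
```

The point of the sketch is the shape of the curve, not the arithmetic: with elastic capacity, the environment is sized for the valleys and rents the peaks, instead of being sized for the peaks and idling through the valleys.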

What I would like to do is present a series of blogs, intended as a philosophical framing rather than a technical roadmap, showing why HPC is the perfect consumer of cloud computing. These blogs will be broken up into distinct topics, in a logical progression, so that we build a common frame of reference. The initial set will address the barriers to adoption:
 
1. Ego – IT as a core competency
2. Cost – getting more value for the same money
3. Trust – a historical lesson
4. Control – changes to organizational structure
5. Security – perspectives on internal security
6. Performance – realities of simultaneous optimization theory
 
Once we frame the barriers, we can then discuss incremental steps to get to value:
 
1. Cloud enablement – transforming your environment into an internal private cloud
2. Private clouds – external private clouds
3. Hybrid configurations – leveraging public clouds for appropriate workloads
4. Public clouds – where and when they may make sense
 
This is the intended general direction, but I reserve the right to deviate based on input from the forum, any needed clarification, or any recalibration necessary to stay true to the intent of the site.
 
Having said that, let’s move into the first topic of discussion: IT as a core competency.
 
Companies need IT to be executed competently, and they need to control IT direction, but IT is not the primary product of the company (IT companies aside) and therefore should not be considered a core competency. We could debate whether tying into the primary function (core business) of the company is the right criterion for determining core competency, but I believe it comes down to the investment decision process of the company’s leadership. The primary drivers for the business revolve around delivering product to customers, developing new markets, and managing customer relationships. When given the option of where to invest critical resources and assets, executive management will primarily invest in the direction of the core business and minimize expenses around every other aspect of running the business. Core competency would imply investment sufficient to differentiate the business from the rest of the world.
 
Further reinforcement of these concepts can be seen by looking at where IT is accounted for within the business. Quite commonly, IT is accounted for as a selling, general, and administrative (SG&A) function. This places it in the “overhead” bucket, where it competes with facilities, HR, accounting, purchasing, and every other group in the company’s SG&A bucket for resources. I only say this to frame the mindset behind the financial decisions. Given that companies are measured by how well they control SG&A expenses (SG&A as a percentage of revenue), and that many components of the SG&A bucket are fixed or headcount-based, you start to see that IT budgets are scrutinized with a control-oriented mindset, optimized on the cost variable. The R&D side is usually the “spend money to make money” side of the house, whereas SG&A is driven to control or even cut costs. Having said that, I have yet to meet anyone who can flip between these two mindsets.
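To make the budget pressure concrete, here is a worked example with made-up numbers (the dollar figures and the 15% and 12% ratios are purely illustrative; real ratios vary widely by industry):

```python
# Illustrative arithmetic only: how an SG&A ratio is computed, and why
# IT budgets inside the SG&A bucket get squeezed. All figures are made up.

revenue = 1_000_000_000        # hypothetical annual revenue: $1B
sgna_expense = 150_000_000     # hypothetical SG&A spend, IT included

sgna_ratio = sgna_expense / revenue
print(f"SG&A as a percentage of revenue: {sgna_ratio:.1%}")   # -> 15.0%

# If leadership targets a 12% ratio, IT competes with HR, facilities,
# accounting, and purchasing for a share of a smaller pool:
target_pool = 0.12 * revenue
print(f"SG&A pool at a 12% target: ${target_pool:,.0f}")      # -> $120,000,000
```

Nothing in that calculation asks whether an extra IT dollar would create value; the metric only rewards shrinking the pool.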
 
In order to control costs and extract as much value as possible from IT spending, most companies limit change and hire resources with a breadth of skills rather than a depth of skills in a specific area. They limit change to get maximum value out of existing assets, maximize the ability to automate, and minimize the number of personnel required to manage the environment. Limiting change this way, though, defeats the ability of technology to ultimately deliver maximum value. It also promotes a philosophy of maintenance instead of development, so many times the symptoms get addressed (just patch it up) instead of the root cause.
 
Additionally, hiring generalists accomplishes many things for the business: the staff can solve any problem in the environment, overhead headcount is minimized, and there is fault tolerance in personnel (people can take vacations, get sick, or leave for another position). The downside is that these generalists are often asked to manage new technologies without the experience to manage them properly (they have not had the opportunity to gain it). The solutions they develop or integrate are more prone to configuration or design mistakes (doing it for the first time), are often less efficient than what is possible (not optimized), and are not designed to scale into a future of technologies that are not yet available, solving problems that have yet to surface. And finally, the complexity of the environment grows faster than the capacity of the organization to manage it.
 
This is not to say that internal IT organizations are not excellent, that the personnel are not very talented, or that these organizations don’t bring great value to the companies they work for. The only point is that there is more value that could be achieved, and that the company does not (and should not) invest in this function the way it invests in the core product(s) of the company. How many times have we all sat in meetings listening to vendors explain the “perfect solution,” knowing they are right because we thought of it long ago but never had the time, funding, resources, or priority to execute it “perfectly”? Cloud computing holds the promise of access to that optimized, “perfect” solution, and next time we will talk about getting that solution for the same price we are paying for IT today…