What does HPC have to do with cloud computing? Well, given that HPC environments are constantly growing, consume large quantities of fairly generic compute resources, and have both peaks and valleys in workload profiles, it would seem that HPC would be the perfect candidate for cloud computing, if only we could get past the barriers to adoption.
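To make the "peaks and valleys" point concrete, here is a back-of-the-envelope sketch; the demand profile and the numbers in it are invented purely for illustration, not drawn from any real environment:

```python
# Back-of-the-envelope sketch: why bursty HPC demand favors elastic capacity.
# The demand profile below is invented for illustration only.

# Hypothetical demand in node-hours per day over one week: quiet valleys
# punctuated by a crunch ahead of a deadline.
demand = [200, 250, 300, 2000, 1800, 220, 150]

peak = max(demand)                    # a fixed cluster must be sized for the peak
fixed_capacity = peak * len(demand)   # node-hours paid for whether used or not
used = sum(demand)                    # node-hours actually consumed

print(f"fixed-cluster utilization: {used / fixed_capacity:.0%}")
# -> roughly 35% for this profile

# With pay-per-use capacity you pay only for consumed node-hours, so a
# cloud rate can carry a premium and still break even against idle iron:
print(f"break-even cloud premium: {fixed_capacity / used:.1f}x per node-hour")
# -> roughly 2.8x for this profile
```

The spikier the profile, the worse the fixed cluster's utilization looks, and the more headroom there is for cloud pricing to win.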
What I would like to do is present a series of blogs, intended as a philosophical framing rather than a technical roadmap, that will show why HPC is the perfect consumer of cloud computing. These blogs will be broken up into distinct topics to create a logical progression and establish a common frame of reference. The initial set of blogs will address the barriers to adoption as follows:
1. Ego – IT as a core competency
2. Cost – getting more value for the same money
3. Trust – a historical lesson
4. Control – changes to organizational structure
5. Security – perspectives on internal security
6. Performance – realities of simultaneous optimization theory
Once we frame the barriers, we can then discuss incremental steps to get to value:
1. Cloud enablement – transforming your environment, internal private cloud
2. Private clouds – external private clouds
3. Hybrid configurations – leveraging public clouds for appropriate workloads (a rough sketch of this idea follows the list)
4. Public clouds – where and when they may make sense
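Since step 3 tends to raise the most questions, here is a minimal sketch of what "leveraging public clouds for appropriate workloads" could look like as a placement policy. The Job fields, the queue-wait figure, and the rules are all hypothetical assumptions of mine, not a prescription:

```python
# Minimal sketch of a hybrid placement policy: keep steady-state and
# sensitive work on the internal (private) cloud, burst the rest to a
# public cloud when the in-house queue cannot meet a deadline.
# All fields, thresholds, and rules here are hypothetical.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    node_hours: float
    data_sensitive: bool     # e.g., export-controlled or proprietary data
    deadline_hours: float    # how soon results are needed

INTERNAL_QUEUE_WAIT_HOURS = 36.0  # assumed backlog on the in-house cluster

def placement(job: Job) -> str:
    """Decide where a job runs under this toy policy."""
    if job.data_sensitive:
        return "private"     # sensitive workloads stay inside
    if INTERNAL_QUEUE_WAIT_HOURS > job.deadline_hours:
        return "public"      # burst out when the queue can't meet the deadline
    return "private"         # default: use the capacity already paid for

for job in [Job("crash-sim", 5000, data_sensitive=True, deadline_hours=24),
            Job("param-sweep", 800, data_sensitive=False, deadline_hours=12),
            Job("nightly-regression", 50, data_sensitive=False, deadline_hours=72)]:
    print(f"{job.name}: run on {placement(job)} cloud")
```

A real policy would weigh data-movement costs and per-provider pricing as well, but even this toy version shows the shape of the decision: the public cloud handles the appropriate overflow, not everything.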
This is the intended general direction, but I reserve the right to deviate based on input from the forum, any needed clarification, or recalibration necessary to stay true to the intent of the site.
Having said that, let’s move into the first topic of discussion, IT as a core competency.
Companies need IT to be executed competently, and they need to control IT direction, but IT is not the primary product of the company (IT companies aside), and therefore should not be considered a core competency. We can debate tying core competency to the primary function (core business) of the company as a criterion, but I believe it comes down to the investment decision process of the company leadership. The primary drivers for the business revolve around delivering product to customers, developing new markets, and managing customer relationships. When given the option of where to invest critical resources and assets, executive management will primarily invest in the direction of the core business and minimize expenses around all other aspects of running the business. Core competency would imply sufficient investment to differentiate the business from the rest of the world.
Further reinforcement of these concepts can be seen by looking at where IT is accounted for within the business. Quite commonly, IT is accounted for as an SG&A function. This places it in the "overhead" bucket, where it competes for resources with facilities, HR, accounting, purchasing, and every other group that makes up the company's SG&A. I only say this to frame the mindset behind the financial decisions. Given that companies are measured by how well they control SG&A expenses (SG&A as a fraction of revenue), and that many components of the SG&A bucket are fixed or based on headcount, you start to see that IT budgets are scrutinized with a control-oriented mindset, optimized on the cost variable. The R&D side is usually the "spend money to make money" side of the house, whereas SG&A is driven to control or even cut costs. Having said that, I have also yet to meet anyone who can flip between these two mindsets.
In order to control costs as much as possible and get maximum value out of what is spent on IT, most companies limit change and hire resources with a breadth of skills rather than a depth of skills in a specific area. They attempt to limit change in order to get maximum value out of existing assets, maximize the ability to automate, and minimize the number of personnel required for management. Limiting change like this, though, defeats the ability of technology to ultimately deliver maximum value. It also promotes a philosophy of maintenance instead of development, and in doing so, symptoms often get addressed (just patch it up) instead of the root cause.
Additionally, by hiring generalists, the business accomplishes many things: it can solve any problem in the environment while minimizing overhead staff, and it gains fault tolerance in personnel resources (people can take vacations, get sick, or leave for another position). The downside is that these generalists are often attempting to manage the infrastructure while lacking experience with the new technologies brought in (they have not had the opportunity to gain it). The solutions they develop or integrate are more prone to configuration or design mistakes (doing it for the first time), are often less efficient than what is possible (not optimized), and are not designed to scale into the future with technologies that are not yet available to solve problems that have yet to surface. And finally, the complexity of the environment is growing faster than the capacity of the organization.
This is not to say that internal IT organizations are not excellent, that the personnel are not very talented, or that these organizations don't bring great value to the companies they work for. The only point is that there is more value that could be achieved, and that the company does not (and should not) invest in this function the way it invests in its core product(s). How many times have we all sat in meetings listening to vendors explain what the "perfect solution" is, knowing they are right because we thought of it long ago, but have never had the time, funding, resources, or priority to execute it "perfectly"? Cloud computing holds the promise of granting us access to that optimized, "perfect" solution, and next time we will talk about getting that solution for the same price we are paying for IT today…