Getting Started on Your Next AI and HPC Project

January 8, 2024

The introduction of ChatGPT in late 2022 and its explosive adoption throughout 2023 lit a fire in the marketplace. Artificial intelligence (AI) went mainstream and became a business necessity. That, in turn, drove the need for compute capacity to support new HPC and AI initiatives in organizations of all sizes across all industries.

The resulting challenge is gaining access to clusters that can meet the compute requirements of these applications. Such clusters are complex, requiring a mix of processor types, workload accelerators, and high-performance storage and interconnect technology not commonly found in enterprise IT. In many cases, on-premises solutions must also be tightly integrated with cloud services. Organizations must also consider whether their IT departments are trained and prepared to deploy and manage such a new, complex environment.

Overprovisioning clusters is easy but comes at great expense. Systems that meet this demand are hard to optimize, scale, and manage on a day-to-day basis. As such, those undertaking such efforts must carefully plan for and address every step of the process: designing, building, deploying, and managing high-performance clusters for AI.

Issues to consider with scalable clusters for HPC and AI

One important point to note is that clusters for new AI and HPC workloads are among the first to combine GPU-based compute, InfiniBand networking, and high-speed storage in a single system. In the past, each of these elements was used at scale individually, but they were rarely brought together in large clusters. Doing so is not easy.

The way to do so successfully is to look at four key aspects of any new at-scale AI or HPC project: the design, build, deployment, and management phases. Let’s look at the nuances of each phase:

Design: Businesses need specialized skills to design clusters that deliver the expected performance, security, stability, and scalability. Most businesses find they lack the internal expertise to do so, given that such clusters must incorporate elements (e.g., GPUs, accelerators) that have not traditionally been part of enterprise IT systems. Another element to consider is the power and cooling requirements of HPC and AI clusters; a rough illustration of the math follows below.
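
To make the power and cooling point concrete, here is a back-of-envelope estimate of rack power draw. This is an illustrative sketch only: the TDP, node count, overhead, and PUE figures are assumptions, not vendor or Penguin Solutions specifications.

```python
# Back-of-envelope power estimate for an AI rack -- illustrative only;
# all figures below are assumptions, not vendor specifications.

GPU_TDP_W = 700          # assumed per-GPU TDP for a modern datacenter GPU
GPUS_PER_NODE = 8
NODES_PER_RACK = 4
HOST_OVERHEAD_W = 2000   # assumed CPUs, memory, NICs, fans per node
PUE = 1.3                # assumed facility power usage effectiveness

node_w = GPU_TDP_W * GPUS_PER_NODE + HOST_OVERHEAD_W
rack_it_w = node_w * NODES_PER_RACK
rack_total_w = rack_it_w * PUE

print(f"IT load per rack:    {rack_it_w / 1000:.1f} kW")
print(f"With cooling (PUE):  {rack_total_w / 1000:.1f} kW")
```

Even with these modest assumptions, a single GPU rack lands near 40 kW, several times the 5–10 kW that traditional enterprise racks are often provisioned for.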

Build: Once the design is finalized, assembling a cluster requires additional unique skills. Businesses need expertise in cluster hardware integration as well as in the software stack, which must be validated and optimized to head off compatibility issues. A minimal example of that kind of validation is sketched below.
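
As a simple illustration of stack validation, the sketch below confirms that the GPU driver and the deep learning framework agree on what hardware is present. It assumes an NVIDIA GPU node with the nvidia-smi utility and, optionally, PyTorch installed; it is a sanity check, not Penguin Solutions' validation process.

```python
# Minimal software-stack sanity check -- a sketch, assuming an NVIDIA
# GPU node with nvidia-smi and (optionally) PyTorch installed.
import shutil
import subprocess

def check_driver() -> None:
    """Confirm the GPU driver responds and list visible devices."""
    if shutil.which("nvidia-smi") is None:
        raise RuntimeError("nvidia-smi not found: driver not installed?")
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version,name", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print("GPUs visible to driver:")
    print(out.stdout.strip())

def check_framework() -> None:
    """Confirm the DL framework actually sees the GPUs."""
    try:
        import torch
    except ImportError:
        print("PyTorch not installed; skipping framework check.")
        return
    assert torch.cuda.is_available(), "CUDA not visible to PyTorch"
    print(f"PyTorch {torch.__version__}, CUDA {torch.version.cuda}, "
          f"{torch.cuda.device_count()} device(s)")

if __name__ == "__main__":
    check_driver()
    check_framework()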

Deploy: Clusters built for AI and HPC often have demanding and complex power and cooling requirements compared to traditional IT systems, and these must be properly integrated and optimized when deploying new clusters. Additionally, the network must match the need to move data between storage and compute elements, as well as between GPUs. And with many businesses using their own data to train AI models, security and data privacy/protection must be addressed. A first-pass fabric check of the kind run at deployment is sketched below.
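
For example, a quick check that InfiniBand links came up at the expected state and rate might look like the sketch below. It assumes the standard ibstat tool from the infiniband-diags package; output formats vary, so the parsing here is approximate. Deeper verification would typically use bandwidth benchmarks such as the NCCL tests.

```python
# Quick InfiniBand link check -- a sketch, assuming the standard
# ibstat CLI is installed; parsing may need adjusting for your fabric.
import subprocess

def ib_link_summary() -> None:
    out = subprocess.run(["ibstat"], capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        line = line.strip()
        # ibstat reports lines such as "State: Active" and "Rate: 400"
        if line.startswith(("State:", "Physical state:", "Rate:")):
            print(line)

if __name__ == "__main__":
    ib_link_summary()
```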

Manage: As with any cluster, those used for AI and HPC must be continuously managed so that they are highly available. Otherwise, critical workloads fail, and results are delayed. However, there are key differences with these clusters that compound management issues. First, AI and HPC workloads vary greatly from job to job and project to project. As a result, workload and performance optimization require persistent monitoring and adjustment. Second, AI and HPC clusters use special components with unique failure signatures. Traditional tools might need to be modified to monitor and manage these elements properly; the sketch below shows the flavor of such a check.
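
As an illustration of GPU-specific health monitoring, the sketch below polls nvidia-smi for temperature and ECC error counters, two signals that often precede GPU failures. The query fields shown exist in recent NVIDIA drivers, but the exact field names and the thresholds are assumptions to verify against your own systems (see nvidia-smi --help-query-gpu).

```python
# Minimal GPU health poll -- a sketch; field names and thresholds are
# assumptions to verify with `nvidia-smi --help-query-gpu`.
import subprocess
import time

FIELDS = ("index,temperature.gpu,utilization.gpu,memory.used,"
          "ecc.errors.uncorrected.volatile.total")

def poll_gpus(interval_s: int = 60) -> None:
    while True:
        out = subprocess.run(
            ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        for row in out.stdout.strip().splitlines():
            idx, temp, util, mem, ecc = [c.strip() for c in row.split(",")]
            # Flag conditions that often precede job failures on GPU nodes;
            # ECC reads "[N/A]" on GPUs without ECC enabled.
            if int(temp) > 85 or ecc not in ("0", "[N/A]"):
                print(f"ALERT gpu{idx}: temp={temp}C ecc_uncorrected={ecc}")
            else:
                print(f"ok    gpu{idx}: temp={temp}C util={util} mem={mem}")
        time.sleep(interval_s)

if __name__ == "__main__":
    poll_gpus()
```

In production, output like this would feed a monitoring system rather than stdout; the point is that GPU-specific counters need to be collected at all, something many traditional IT monitoring setups omit.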

Teaming with a technology partner

Businesses and organizations face a perfect storm when it comes to meeting AI and HPC compute demands. They need systems that tightly integrate technologies (e.g., GPUs, high-speed interconnect, and high-speed storage), each of which is specialized and not widely used in enterprise IT. These systems must also be highly optimized to deliver the required performance economically. Such an undertaking requires special skills and expertise.

One way to approach this is to hire people with the needed industry knowledge and to train existing staff in these areas. An alternative, as with many other IT endeavors, is to work with a partner who brings real-world experience with the new technologies and can deliver a suitable system.

These are all areas where Penguin Solutions can help. The company has long been known for its proven record in designing and deploying efficient, cost-effective HPC systems for extreme workloads. It has now applied the same strategies to AI.

To that end, Penguin Solutions is applying its 25 years of HPC experience to designing, building, deploying, and managing AI factories: the supercomputing clusters that run sophisticated AI workloads. As the factory name implies, the company’s work operationalizes the use of AI. Penguin applies best practices and leverages its strong, long-term relationships with GPU, networking, and storage partners to build highly efficient and scalable AI systems for companies like Meta that are leading the use of AI for business.

Going a layer deeper, Penguin Solutions can help in each of the four major stages discussed above:

Design: An engagement with Penguin starts by reviewing a project’s vision, evaluating where data will come from and how much will be used, understanding the compute and storage requirements, and more. Penguin Solutions then uses proprietary software to design a system with the required performance, security, and scalability. A simplified example of this kind of sizing appears below.
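
To give a feel for the sizing questions involved, here is a rough estimate relating model size to GPU count. The figure of roughly 16 bytes of weight, gradient, and optimizer state per parameter for mixed-precision, Adam-style training is a common rule of thumb; all numbers here are illustrative assumptions, not Penguin Solutions' methodology.

```python
# Rough GPU-count estimate for training -- an illustrative rule of
# thumb only; every figure below is an assumption.
PARAMS_B = 70            # model size in billions of parameters (assumed)
BYTES_PER_PARAM = 16     # weights + grads + optimizer state, mixed precision
GPU_MEM_GB = 80          # memory per GPU (assumed)
USABLE_FRACTION = 0.7    # head-room for activations, buffers, fragmentation

state_gb = PARAMS_B * BYTES_PER_PARAM        # 1e9 params * bytes/param ~= GB
min_gpus = state_gb / (GPU_MEM_GB * USABLE_FRACTION)
print(f"Model/optimizer state: ~{state_gb:.0f} GB")
print(f"Minimum GPUs just to hold it: ~{min_gpus:.0f}")
```

Real sizing must then layer on throughput targets, parallelism strategy, storage bandwidth, and failure tolerance, which is where specialized design expertise comes in.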

Build: An approved cluster is then pre-configured and assembled before shipping and delivery. As part of that process, experts integrate all the components of the system. The in-factory work includes racking, cabling, and burn-in testing. Additionally, the software stack is validated to avoid problems once the system is deployed.

Deploy: Pre-built clusters are delivered by Penguin Solutions, which provides on-site help to ensure the clusters are working properly. The company works with cooling and storage partners to verify operational performance. Once the system is physically in place, Penguin Solutions software, including Scyld ClusterWare and Scyld Cloud Workstation, is used to provision the solution. 

Manage: Penguin Solutions, a certified Nvidia DGX-Ready Managed Services provider, is one of the leading providers of AI factories, with over 50,000 GPUs under management. For most organizations, the complexity of supercomputers and cloud computing presents serious budgetary and management challenges. That’s why Penguin Solutions has developed its own software for cluster management and for supporting hybrid infrastructure, combining on-premises, private cloud, and public cloud environments.

A final word

Working with Penguin Solutions when starting or expanding AI and HPC efforts speeds the journey from concept to working solution. It also frees existing staff for other work and reduces the need to hire new staff with hard-to-find skills.

Most importantly, the company’s proven AI factory solutions ensure optimized use of expensive compute resources and a lower total cost of ownership (TCO).

If you are interested in learning more, please visit our website or contact a Penguin Solutions HPC and AI expert.
