The Low-Latency Imperative and Amazon’s New CCI for HPC

By Nicole Hemsoth

July 15, 2010

Today Purdue University’s Coates Cluster, ranked at the #103 spot on the TOP500 supercomputer list, was declared the first native 10Gb Ethernet cluster system to make the rankings, which means, of course, that the clusters before it have all been employing the mighty InfiniBand to sate their low-latency imperatives.

There is little room for questioning that the purist side of the high performance computing community sees InfiniBand as the gold standard. After my surprise at the announcement of Amazon’s new HPC-inspired Cluster Compute Instances, which deliver performance equivalent to the #145 position on the TOP500 list, I figured that the word “InfiniBand” would follow, but it didn’t. Amazon instead went with 10GbE, a decision that has ruffled a few feathers because some still see it as inferior on the low-latency front.

In an interview with HPCwire’s Michael Feldman, Deepak Singh, Business Development Manager at Amazon Web Services, responded to a question that many were asking after they’d had a day to sit with Amazon’s news: why did they opt for a 10GbE network rather than, say, InfiniBand?

Singh replied that Amazon looked to the customer base to understand what technology options were best-suited to their needs, saying, “we know that for HPC, microseconds matter. We specifically engineered Cluster Compute Instances with 10Gbps Ethernet bandwidth to give customers the low-latency network performance required for tightly-coupled, node-to-node communication. Cluster Compute Instances will provide more CPU than any other instance type and customers can expect to find the same performance provided by custom-built infrastructure but with the additional benefits of elasticity, flexibility and low per-hour pricing.”
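Singh’s “microseconds matter” point can be made concrete with a back-of-the-envelope model (all numbers here are illustrative assumptions, not figures from Amazon or Mellanox): a tightly coupled solver pays the network latency on every message exchange, so total runtime is roughly compute time plus exchanges times per-message latency.

```python
# Back-of-the-envelope model of how interconnect latency affects a
# tightly coupled HPC job. All numbers are illustrative, not measured.
def runtime_seconds(compute_s, exchanges, latency_us):
    """Compute time plus the latency cost paid on every message exchange."""
    return compute_s + exchanges * latency_us * 1e-6

exchanges = 10_000_000   # message exchanges over the whole run (assumed)
compute = 600.0          # 10 minutes of pure computation (assumed)

# Rough per-message latency classes, circa 2010 hardware (assumed).
for name, latency_us in [("InfiniBand-class", 2), ("10GbE-class", 50)]:
    total = runtime_seconds(compute, exchanges, latency_us)
    print(f"{name:18s} {latency_us:3d} us/msg -> {total:6.0f} s total")
```

Under these assumptions, a 2 µs interconnect adds 20 seconds of overhead while a 50 µs one adds 500 seconds to the same job, which is the sense in which the interconnect, not just the CPU count, bounds how many simulations can run per day.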

When asked whether or not they had plans to add InfiniBand-networked clusters, Singh stated that Amazon would “continue to evaluate all technologies as we receive customer feedback on the new instance type,” which translates roughly into: no, not anytime soon, but we appreciate that you asked.

Amazon revealed a surprising amount of information for this new instance type, at least compared to their other releases, which offered just enough information for users to form a rough idea (another big weakness in the EC2 option for running HPC-type applications). While they did share the hardware specs this time around, the specifics are still cloudy. For instance, when HPCwire asked about configuration details (i.e., adapters, switches and so on) and for metrics on node-to-node latency, or any latency information at all, Singh’s response was back to the EC2 generalities. He stated that Amazon “does not share details on the specifics of network implementation. What I can tell you is that the new Cluster Compute instances operate on a 10GbE network that provides full cross-sectional bandwidth to members of a cluster and very low latency.”
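Since Amazon publishes no latency figures, anyone evaluating CCI would likely start by measuring for themselves; HPC practice is an MPI ping-pong test between two nodes. A minimal stand-in sketch using raw TCP sockets is below (here both ends run on localhost, so it measures the OS network stack rather than a real cluster interconnect; over EC2 one endpoint would run on each instance):

```python
# Minimal ping-pong latency sketch over TCP. On localhost this reflects
# the OS stack only; run the two ends on separate nodes to measure a
# real interconnect. A proper benchmark would use MPI (e.g., osu_latency).
import socket
import threading
import time

ITERATIONS = 1000

def echo_server(server_sock):
    """Accept one connection and echo each 1-byte ping back."""
    conn, _ = server_sock.accept()
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    with conn:
        for _ in range(ITERATIONS):
            conn.sendall(conn.recv(1))

server = socket.socket()
server.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=echo_server, args=(server,)).start()

client = socket.socket()
client.connect(("127.0.0.1", server.getsockname()[1]))
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

start = time.perf_counter()
for _ in range(ITERATIONS):
    client.sendall(b"x")          # ping
    client.recv(1)                # pong
elapsed = time.perf_counter() - start
client.close()
server.close()

rtt_us = elapsed / ITERATIONS * 1e6
print(f"mean round-trip latency: {rtt_us:.1f} us")
```

Disabling Nagle's algorithm (`TCP_NODELAY`) matters here, since buffering tiny messages would otherwise swamp the very latency being measured.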

Gilad Shainer, Senior Director of HPC and Technical Computing at Mellanox Technologies, a company that is definitely an advocate of InfiniBand (although it still caters to the 10GbE market), noted: “Many of the HPC systems around the world are being built for maximum performance and efficiency—hence InfiniBand, GPUs, etc. People using HPC want to be able to run their simulations as fast as possible and as many as possible per day. Amazon’s new entry includes 10GigE for the I/O and incorporates the latest CPUs, but is currently limited in the amount of CPUs that users can utilize. I believe that Amazon will need to continue to improve their HPC cloud offering to include technology being used in most of today’s HPC systems to provide more compute resources per user.”

Now that the thrill of the news has worn off, people are taking a much closer look not only at the Linpack results that earned Amazon’s virtual placement (it takes more than a single test to get on the TOP500; this was more of an exercise to demonstrate CCI’s capabilities) but also at whether this is a viable alternative to in-house HPC clusters. It delivers far more than standard EC2 and answers the concerns of many in the community who felt they just weren’t getting enough out of what was being offered.

I look forward to seeing how others rise to the challenge, since it’s clear now that the HPC market is important enough to cater to. If someone else ups the ante with InfiniBand and more CPU horsepower (via magic, of course), what will this mean?

I would love to hear some thoughts on this issue. How important is the network, and do the other drawbacks stand in the way even with the capabilities CCI provides? In short, is it really just about the latency imperative?
