HP’s Long Haul Itanium Strategy

By Michael Feldman

April 28, 2006

This past week at the Gelato Itanium Conference & Expo (ICE), attendees got an opportunity to hear about the latest developments in the world of Linux on Itanium. Prior to his keynote address at the conference, we spoke with Jerry Huck, HP Fellow with the company's server global business unit that produces the Itanium-based HP Integrity servers. Huck was one of the original developers of the architecture and now focuses his attention on moving the Itanium strategy forward as well as evangelizing HP's server offerings. We also talked to Ed Turkel, manager of the product and technology marketing group for HP's High-Performance Computing Division, who shared his perspectives about Itanium in the HPC marketplace.

System design is all about balance

One of the main thrusts of Huck's Gelato ICE keynote was how system designers deal with a variety of issues when building high performance systems. Balancing different aspects of performance with power requirements, as well as costs, makes the design of high-end systems a challenging endeavor.

According to Huck, anytime you move above the commodity system level, you encounter a set of non-linearities where better capacity, bandwidth, latency and reliability are achieved only with higher-cost components. The benefits are obvious, but for the HPC and mission-critical server market you need to pay close attention to the optimal mix of capabilities. For the design engineer, it's often tempting to add capabilities beyond what the customer really needs.

“As system designers, we have to put in the right amount of those characteristics that meets the needs — without going overboard,” says Huck.

The current challenge of multi-core processors is another concern for the system designer. The amount of parallelism that can be provided by multiple cores on a chip must be carefully matched to the intended use.

“What is the right direction for providing the appropriate amount of capacity for large-scale systems?” asks Huck. “If we just continue to say we want to have at least as many sockets as we used to have, now you're challenged with 128 cores or 256 cores — machines that in the past were more like the exotic dedicated machines used by the high performance computing community. But it's not so easy for a standard business to take 256 cores and get it to work well on an Oracle database.”

Another looming issue for the system designer today is power and cooling. As hardware components shrink, systems become more powerful, but also more dense, leading to power and cooling problems. To address these problems, designers are being forced to think outside the rack.

“What's fueling this is that the price of cycles has been dropping,” says Huck. “We're always delivering lower power per unit of work over the years; it just hasn't been as good a slope as the performance curve. The amount of energy used by the CPUs as they get more integrated and become a bigger part of the system has become a larger fraction of the overall system. So as system designers we're seeing power dissipation becoming more of an issue. The other related challenge is that density is going up. The energy per cubic meter is driving in a direction that is making it a real challenge to cool these things.”
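
To put rough numbers on the density trend Huck describes, here is a minimal sketch that estimates rack-level power and volumetric heat density. The server count, per-server wattage and rack volume below are illustrative assumptions, not HP figures.

```c
/* Rough power-density estimate for a dense rack.
 * All figures are illustrative assumptions, not HP specifications. */
#include <stdio.h>

int main(void)
{
    const int    servers_per_rack = 64;     /* assumed 1U dual-socket nodes */
    const double watts_per_server = 450.0;  /* assumed draw under load */
    const double rack_volume_m3   = 1.8;    /* rough envelope of a 42U rack */

    double rack_kw   = servers_per_rack * watts_per_server / 1000.0;
    double kw_per_m3 = rack_kw / rack_volume_m3;

    printf("Rack power:   %.1f kW\n", rack_kw);
    printf("Heat density: %.1f kW per cubic meter\n", kw_per_m3);
    return 0;
}
```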

Huck admits that system designers are just able to keep up in this area. He says they haven't hit any kind of brick wall in terms of dealing with the heat, but it's not just a matter of running a little more air over the machines. You need colder air and you need to be more efficient with it. Water cooling is emerging as a viable strategy.

“We're starting to sell these half racks that sit on the side of very dense racks, which allow us to run chilled water through the side racks to locally chill the air,” says Huck. “Now it's a question of the correct strategy to bring that water closer and closer to the systems. I mentioned to the last group I was talking to — engineering students — that some of you should be taking plumbing classes.”

Ed Turkel agrees. “We've seen a big difference in procurements in HPC. Power and cooling has always been an issue, but it's really bubbled up to a much higher priority in recent bids, compared to the ones we've seen in the past. And again, I think it's because we're just capable of packing more into a smaller amount of space. As people's compute requirements grow, they just want more.”

So how do HP's Itanium-based systems balance all this? Huck says that in order to meet the high-throughput needs of commercial and HPC customers, HP's Integrity systems push towards higher levels of capacity and performance — as compared to 64-bit x86 servers — while trying to keep power requirements in the middle of the spectrum.

“Time to completion is an important metric,” explains Huck. “Itanium systems, both from HP and others, generally are large capacity. Larger performance envelopes — more cores, more sockets.”

As far as reliability goes, HP pushes the Integrity systems up towards the highest levels. Huck says their target customers are often running mission-critical work on these machines, so system crashes or computing the wrong answer is simply unacceptable. A recent internal study at HP asked the question: if you didn't have any parity or ECC in your memory, how often would you make a mistake? For most large-scale, multi-gigabyte machines, the answer was “often.” So if you can't correct memory errors on your company's thousand-server cluster, you're going to end up crashing a lot — or worse, computing bad results and not knowing it.
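
To make the point of that study concrete, here is a minimal back-of-the-envelope sketch of how often an unprotected cluster would see a memory error. The soft-error (FIT) rate and the per-node memory size are assumed placeholders, not figures from HP's internal study.

```c
/* Back-of-the-envelope soft-error estimate for a cluster with no ECC.
 * The FIT rate and memory sizes are assumed placeholders. */
#include <stdio.h>

int main(void)
{
    const double fit_per_gb  = 1000.0; /* assumed failures per 1e9 hours, per GB */
    const double gb_per_node = 16.0;   /* assumed memory per server */
    const int    nodes       = 1000;   /* the "thousand-server cluster" above */

    /* Expected uncorrected memory-error events per hour, cluster-wide. */
    double errors_per_hour = fit_per_gb * gb_per_node * nodes / 1e9;

    printf("Expected errors per hour:  %.4f\n", errors_per_hour);
    printf("Mean time between errors:  %.1f hours\n", 1.0 / errors_per_hour);
    return 0;
}
```

With these assumptions the cluster sees an undetected memory error roughly every two to three days, which is the sense in which the study's answer was “often.”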

The latest version of HP's Integrity SX2000 chipset has a feature called double chip kill. You can lose two parts in a DIMM and still compute around it. That level of reliability might be more than some HPC customers require, but it's certainly applicable to HP's mission-critical customers.

“Even within HPC, there's variation in the RAS kinds of functionality,” says Turkel. “There are differences in requirements between, for example, a 64-way node, like a Superdome, versus a two-socket system that's part of a larger cluster. A lot of what we're looking at for the HPC products is the right level of RAS features and some degree of configurability. For the Superdome we might configure it more lean and mean for an HPC application, versus what a bank would require.”

“One more aspect of where capacity comes into play is the amount of memory — how many gigabytes per socket, per core, per system you can configure into the system,” says Huck. “That's a place where HPC and commercial systems probably line up pretty well. Even something as mundane as TLB page sizes can come into play here. The Itanium architecture allows very large page sizes, so that we can cover much more of this high-end memory than commodity-based systems, both at a page-size level as well as total memory coverage.”

“And that page size has a real influence on application performance, for certain classes of applications — good examples being NASTRAN, CAE, Gaussian and Life Sciences,” adds Turkel.
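
The coverage Huck refers to is TLB reach: the number of TLB entries multiplied by the page size. The sketch below compares a commodity 4 KB page with a 256 MB large page of the kind Itanium can map; the TLB entry count is an assumption for illustration, not a figure for any particular processor.

```c
/* TLB reach = number of TLB entries * page size.
 * The entry count and page sizes are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    const int       tlb_entries = 128;                 /* assumed data-TLB entries */
    const long long small_page  = 4LL * 1024;          /* 4 KB commodity page */
    const long long large_page  = 256LL * 1024 * 1024; /* 256 MB large page */

    printf("Reach with 4 KB pages:   %lld KB\n",
           tlb_entries * small_page / 1024);
    printf("Reach with 256 MB pages: %lld GB\n",
           tlb_entries * large_page / (1024LL * 1024 * 1024));
    return 0;
}
```

With 128 entries, 4 KB pages cover only half a megabyte of memory before TLB misses begin, while 256 MB pages cover tens of gigabytes — which is why large pages matter on machines with very large memories.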

Itanium takes on RISC

HP's original collaboration with Intel to develop the Itanium was predicated on the notion that it would replace HP's own 64-bit RISC architecture, PA-RISC, while leveraging Intel's chip making capabilities to achieve a greater economy of scale. The new architecture was a break from RISC and used an approach called Explicitly Parallel Instruction Computing, where multiple instructions are assembled by the compiler to execute in parallel.

“At the fundamental level, Itanium is really driving towards higher levels of instruction-level parallelism,” explains Huck. “That's how it's different from the RISC architectures. It's trying to achieve more work per cycle than what you accomplish in a RISC architecture. It does it with less hardware — fewer built-in circuits for the purpose of trying to create parallelism. We have a couple of core features in the architecture — predication and speculation. These are the fundamental differences that separate Itanium from the RISC architectures.”
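
Predication converts a branch into instructions guarded by a predicate, so both paths can be issued in parallel and there is no branch to mispredict. The C sketch below mimics that if-conversion at the source level; it is a conceptual illustration only, not Itanium assembly or actual compiler output, and speculation, the other feature Huck mentions, is not shown.

```c
/* Conceptual illustration of if-conversion (predication).
 * This mimics the transform an EPIC compiler performs internally;
 * it is not actual Itanium code generation. */
#include <stdio.h>

/* Branchy version: the CPU must predict which path is taken. */
static int max_branchy(int a, int b)
{
    if (a > b)
        return a;
    return b;
}

/* If-converted version: both values are computed and a predicate
 * selects the result, so there is no branch to mispredict. */
static int max_predicated(int a, int b)
{
    int p = (a > b);             /* the predicate */
    return p * a + (1 - p) * b;  /* predicate selects between the two paths */
}

int main(void)
{
    printf("%d %d\n", max_branchy(3, 7), max_predicated(3, 7));
    return 0;
}
```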

So for a given clock rate, you're doing more work and getting greater throughput. Another way of looking at that is that you don't need to have as high a clock rate to perform the same amount of work. But according to Huck, the architecture is just one component that determines microprocessor performance.

“I usually talk about performance as three components in a microprocessor,” he says. “One is the core architecture — the fundamental instruction set. The second thing is what IC process you're able to implement and how well you can design against that process. And third is the design team that's putting it all together. In the end, the customer sees the collection of all three of those.”

HP certainly hopes their customers like what they see. With the planned phase-out of PA-RISC and their large investment in the Integrity systems, the company is depending on the success of their Itanium strategy. With the dual-core Montecito coming out later this year, and with around 7000 applications available on the architecture, HP expects to be more competitive as time progresses.

“I think we see ourselves squarely competing with the RISC architecture in the commercial and high performance computing space,” says Huck. “And that market is quite large — about half the dollar volume of total servers. We expect to continue to take share away from Power and Sparc, even from our own PA-RISC and Alphas, with our Itanium-based systems. And the ability to differentiate from our smaller-scale servers based on Opterons and Xeons is still quite viable. The roadmap shows tremendous capability in the upcoming products. We're going to dual-core soon, and scaling up from there with clock rate as well as core count.”

“We often get a lot of questions about application availability for Itanium,” notes Turkel. “When we show them our list of available applications, people tend to be surprised. If you look at areas like computer-aided engineering, we've got a very strong portfolio of applications on Itanium.”

But skepticism about the architecture abounds in the broader server community. Unlike HP, Sun and IBM continue to develop their own RISC microprocessors rather than switching to Itanium. And after more than five years in the field, Itanium still holds only a fraction of the high performance server market. Itanium's tentative debut in 2001, with the under-performing Merced chip, was a disappointment.

“There probably was an over-stated expectation,” admits Huck. “People were expecting it to overtake the world in two years and it didn't, especially in the higher end of the market, which moves more slowly. As new deployments and opportunities come up, and as people start weighing the capabilities of Itanium as they replace their three-, four-, and five-year-old deployments, they're going to see that it has a clear place in their data centers. It's just like the standard curve in technology adoption. We were too much on the hype side of the curve for a while.”

“Specifically in HPC, there were some initial disappointments with early releases,” adds Turkel. “It didn't pan out as the next great thing, initially. But one of the things that people in HPC maybe don't notice is that it's been growing quite steadily. If you go to the IDC numbers that break down the market by processor type, you see a steady uptick of the Itanium market, while the RISC market has been ramping down. And that's as we expected. There's a bigger volume on the x86 and x86-64 side, but that's expected as well, since that's been growing up from the bottom.”

Turkel thinks InfiniBand technology adoption might provide a good analogy. “When it first came out, people were very excited about it. But it didn't quite get into the marketplace with the kind of functionality that people expected. And so all of a sudden, everyone was down on InfiniBand. But if you look closely at the ramp for people actually using it, it's extremely strong right now.”

“In some sense it's a lot like the early Unix marketplace,” says Huck. “In the beginning, we didn't have much market coverage, but HP became the dominant Unix commercial vendor over MIPS and the other competitors out there. We slowly took the business away from the proprietary architectures of the time, moving them to the client-server model that Unix was supporting. It was slow but sure growth to eventual leadership and I think we'll see the same thing here.”
