Picking the Right Trends

By Michael Feldman

February 23, 2007

Because high performance computing lives on the leading edge of information technology, predicting the path of HPC is like forecasting the future of the future. When Cray Research and CDC began selling supercomputers with custom processors in the early '70s, it probably seemed inconceivable that in three decades most high performance computing would be done on the descendants of PC chips. Only using the rear-view mirror of the present can we see that it was all inevitable. The economics of volume chip production, the introduction of cluster and grid computing, the momentum of a rapidly growing software base, and Moore's Law all conspired to propel the x86 into HPC preeminence. Everything else was just noise.

It's easy to identify the visible new trends today. In fact, they're generally the same in HPC as they are in the overall industry: the rise of multi-core and heterogeneous processing, the importance of power consumption, the industry embrace of open source software, virtualization — in all its forms, and the struggle for application parallelization. But which of these, if any, is just noise? And how will all these elements interact?

Predicting winning technology formulas is not just an exercise for the armchair geek. It's the intellectual focus of most IT organizations and informs their most basic business decisions. And while most companies end up just following trends to stay afloat, some actually set them for the rest of the industry. Intel and AMD fall into the latter category.

Even though the two x86 chipmakers are going after the same markets, their underlying technology strategies are diverging. Intel uses its in-house semiconductor and CPU design expertise to be the leader in x86 performance and power-efficiency. Its aggressive two-year cadence of processor shrinks and core redesigns is designed to stay ahead of its rivals on fundamental microprocessor technology. Meanwhile, AMD emphasizes system design to achieve scalability and overall system throughput. The company is also trying to establish an AMD-based ecosystem, using Torrenza and HyperTransport to foster open standards for third party silicon.

While these two chip titans are busy inventing the future, they are also affected by trends they can't control. Late last year, AMD made the biggest strategic decision of its life when it acquired ATI. It saw the future of general-purpose processing as something more than the x86. The company's CPU-GPU Fusion initiative and the ongoing development of discrete GPUs is AMD's way to bring heterogeneous processing in-house. Rumors abound that Intel is working on adding high-end GPUs to its offerings as well. Publicly the chipmaker has been mum on the subject, but the Intel web page that lists job openings for graphics engineers (http://www.intel.com/jobs/careers/visualcomputing/) provides a pretty good indication of the company's intent.

In this week's issue, Intel and AMD each offer an outline of their high performance computing strategies — at least the public ones. Stephen Wheat, senior director of Intel's HPC Business Unit, talks about x86 high performance computing and how the company's overall strategy fits into that market. Phil Hester, AMD CTO, and Bob Drebin, CTO for AMD's Graphics Products Group, answer questions about how their company's technology roadmap targets future HPC workloads.

What may be most similar about the two companies is their measured devotion to high performance computing. Both organizations have internal HPC units, but these entities have only limited effect on driving overall company strategy. That makes good business sense. The x86 market is nearing $30 billion annually (Mercury Research, 2006), while the entire HPC market is around $10 billion, according to IDC — and HPC's x86 slice is just a fraction of that. While high performance computing is important to both companies, it's treated as a leverage point for the larger business rather than as an end-point in itself.

“[W]e rarely look at the HPC segment in isolation,” said Intel's Stephen Wheat. “HPC innovation quickly migrates into the enterprise segment. There are many opportunities for HPC to influence offerings in the larger markets.”

The realities of commodity-based HPC are intimately tied to the mega-trend of multi-core processors. This architectural shift means that parallel processing is not just for HPC anymore. All the chipmakers, not just Intel and AMD, are counting on this. In fact, multi-core processing is going to blur the distinction between general purpose and high performance computing. It may be the most profound development in computer hardware since the integrated circuit.

The February edition of CTWatch Quarterly (http://www.ctwatch.org/quarterly/) has devoted the entire issue to the multi-core revolution. It traces the rationale behind the revolution, describes its impact, and outlines the problems this new architecture has created for computing in the 21st century. The four articles in the issue include: The Impact of Multicore on Computational Science Software, The Many-Core Inflection Point for Mass Market Computer Systems, The Role of Multicore Processors in the Evolution of General-Purpose Computing, and High Performance Computing and the Implications of Multi-core Architectures. All are worth reading if you want to understand the paradigm that is shifting beneath your feet.
Pushback on Programming

Apparently my commentary a couple of weeks back, HPC Programming for the Masses, struck a nerve. Professor Marc Snir, head of the computer science department at the University of Illinois at Urbana-Champaign, took exception to my perspective on the relative importance of different programming language models for HPC. The view I put forth was that HPC-enabled versions of domain-specific languages such as MATLAB, Excel and SQL will be more important than traditional third-generation languages in spreading the commercial use of HPC, since they will broaden the developer base beyond computer scientists.

Snir's point of view is that we should leave programming to the professionals — i.e., software engineers. To be honest, he's in good company. Bjarne Stroustrup, the inventor of C++, expressed the same sentiments in a recent interview for Technology Review. However, Snir also implies that I believe higher-level languages will make software engineers redundant. Actually, I never suggested that and certainly don't believe it. As I pointed out in my commentary, most domain-specific and fourth-generation languages are built on third-generation technology developed by the programming elite.

Snir does make some interesting observations about the PGAS languages and the HPCS effort. In the process, the professor also gives us a treatise on an implementation language for HPC. This alone is worth a read.

Oddly enough, Snir circles back around to recognize that application-specific languages do represent an important paradigm for HPC.

“High-level languages should match the application domain, not the architecture of the compute platform,” he says. “Developing high-level languages that satisfy the needs of HPC but are less convenient to use on more modest platforms is a waste of money.”

At that point, I'm not sure which side he's really arguing for. Read the article and decide for yourself.

—–

As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].
