Beyond Speeds and Feeds

By Geoffrey James

July 13, 2009

High Performance Computing (HPC) was once limited to a select group of laboratories where scientists or engineers solved complex problems on huge mainframe “supercomputers” that cost millions of dollars to buy and maintain. Today, falling prices for computing power, combined with new clustering architectures, have brought affordable HPC to a wide range of applications across a growing number of industries.

“Computer power is the raw fuel for business innovation,” explains Dr. Jeff Layton, enterprise technologist for HPC at Dell Inc. “Making HPC available to a wider range of customers, and making it more cost-effective, will have a long-term effect, not just on productivity but also on the ability of companies to thrive, not only during difficult economic times but also for many years to come.”

Along with this democratization of HPC has come a growing understanding, among pundits and executives alike, that the traditional way of measuring HPC—the raw performance of a single CPU—seems out of date. As the computer industry leaps to more-complex computing environments, it has become clear that HPC performance must be redefined in order to encapsulate the wider business case, according to Scot Schultz, AMD’s senior strategic alliance manager for HPC.

“What’s important is not how fast the CPU can run a test suite but how effectively it can solve a real-life problem,” Schultz says.

Productivity Now Trumps Raw Performance
More and more analysts, OEMs and IT executives have come to understand that raw performance is less important than how the underlying architecture makes end users more productive. “The performance that’s actually delivered to end users is highly dependent on the chip architecture and how well the software can take advantage of it,” explains Layton.

IT managers who make HPC buying decisions based purely on those obsolete measurements risk getting less bang for their buck, according to John Spooner, an analyst at the market research firm Technology Business Research (TBR). “There are always going to be customers who want all-out performance and don’t care about anything else,” he admits, “but many companies are now embracing the idea that the greatest business value comes not from raw performance but from getting the maximum performance for your overall IT dollar.”

Companies that adopt HPC are typically less interested in “speeds and feeds” than in creating a long-term competitive advantage. A case in point is the sport department of Ferrari, one of the first companies to test Microsoft’s Windows HPC Server 2008.

“Ferrari is always looking for the most-advanced technological solutions, and the same goes for software and engineering,” says Piergiorgio Grossi, head of information systems at Ferrari. Like many other companies embracing HPC today, Ferrari is using it widely across the corporation—“for our users, engineers and administrators,” Grossi says.

Companies need to be thinking about productivity as a performance measurement, according to Vince Mendillo, director of marketing for the HPC business group at Microsoft. “HPC is expanding into vertical markets, ranging from engineering to aerospace to energy and many other industries,” he explains. “Ultimately, HPC is about helping customers get the job done.”

Measuring Productivity
HPC has traditionally been measured in terms of the raw computing power of a single core on a single CPU. Using that primitive metric, the battle for “market leadership” has been primarily between the two leading CPU firms: AMD and Intel, according to Rob Enderle of the Enderle Group. “For decades, these two companies have traded positions as the ‘industry leader’ when it comes to raw performance figures,” he says.

It’s a contest that’s likely to continue for the foreseeable future, according to Ken Cayton, research manager for enterprise platforms at the market research firm IDC. “Both companies are constantly moving forward, so one would expect to see the same kind of leapfrog behavior we’ve seen so frequently in the past,” he says.

However, IT executives need to be aware that the traditional “speeds and feeds” measurement is largely irrelevant in a world in which HPC takes place on CPU chips that contain multiple cores, which are, in turn, harnessed into clusters. In a multiprocessing environment, other metrics such as power efficiency start becoming more important, according to TBR’s Spooner. “Because energy costs are such a big proportion of the expense of running a large data center, businesses now want to maximize the amount of work they get done for each unit of electricity they pay for,” he explains.

Indeed, some companies are finding that the hard limit on their HPC capability isn’t raw performance but the amount of electricity that can be piped into their data center. Cayton relates an experience he recently had with a Manhattan firm that does financial analysis but has only a limited amount of power coming into the building. “It therefore is more concerned with how effectively its HPC system uses power than it is about how quickly one element of the system can perform calculations,” Cayton explains.
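The power-constrained scenario Cayton describes comes down to a simple calculation: how much work a system delivers per watt, and what that power draw costs over a year. The sketch below compares two hypothetical clusters on exactly those terms; every figure in it is an illustrative assumption, not vendor data.

```python
# Illustrative sketch only: compares two hypothetical clusters on
# work-per-watt and annual electricity cost rather than raw peak speed.
# All figures below are made-up assumptions, not vendor data.

KWH_PRICE_USD = 0.12          # assumed electricity price per kWh
HOURS_PER_YEAR = 24 * 365

clusters = {
    # name: (sustained performance in teraflops, average draw in kilowatts)
    "cluster_a": (90.0, 300.0),
    "cluster_b": (80.0, 220.0),
}

for name, (tflops, kw) in clusters.items():
    perf_per_watt = (tflops * 1e12) / (kw * 1e3)      # flops per watt
    annual_energy_cost = kw * HOURS_PER_YEAR * KWH_PRICE_USD
    print(f"{name}: {perf_per_watt / 1e9:.2f} gigaflops/watt, "
          f"${annual_energy_cost:,.0f} per year in electricity")
```

On these assumed numbers the nominally slower cluster delivers more work per watt and costs roughly a quarter less per year to power, which is precisely the trade-off a power-limited buyer would weigh.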

Architecture and Performance
With multiprocessing and clustering, the speed of an individual core is often far less important than the ability to move data around between the various chips, explains Jordan Selburn, principal analyst at the market research firm iSuppli.

“In a lot of areas and applications, raw horsepower isn’t a significant factor, because other standards drive the degree of speed needed and anything excess is just that: excess,” Selburn explains. “The key in HPC applications is how efficiently you can perform the needed function.”

And that efficiency is intimately tied to the underlying architecture of the CPU chip, according to Einar Rustad, vice president of business development at Numascale, a company that makes chip sets that link multiple CPUs into HPC clusters. “The challenge with multiprocessing is keeping everything in sync, which means that each CPU must have swift access to the data that’s been processed by the other CPUs,” he explains.

To accomplish this, the cluster must be able to move data around quickly, something the HyperTransport™ architecture that AMD uses makes relatively easy. “With other chip architectures, you have to move data around by using the front-side bus, which is not only ungainly from an electronics viewpoint but also incurs a lot of overhead and prevents a true shared memory architecture with cache coherence,” says Rustad. “AMD’s HyperTransport technology, by contrast, makes it easier to connect CPUs together in a way that enables programmers to address the combined memory space and to benefit from the aggregated memory bandwidth.”
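Rustad’s point about addressing a combined memory space can be seen on any multi-socket Linux machine, where each socket’s local memory appears as a NUMA node inside one cache-coherent address space. The minimal sketch below is illustrative only and assumes Linux’s sysfs layout; it is not specific to any vendor’s interconnect. It simply lists each node’s CPUs and local memory.

```python
# Minimal sketch (assumes a Linux system exposing NUMA info in sysfs):
# on a cache-coherent multi-socket machine, each socket's local memory
# appears as a NUMA node, yet all of it belongs to one address space
# that any process can allocate from.
import glob
import re

for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(f"{node_dir}/meminfo") as f:
        meminfo = f.read()
    total_kb = int(re.search(r"MemTotal:\s+(\d+) kB", meminfo).group(1))
    with open(f"{node_dir}/cpulist") as f:
        cpus = f.read().strip()
    node = node_dir.rsplit("node", 1)[-1]
    print(f"NUMA node {node}: CPUs {cpus}, {total_kb / 1e6:.1f} GB local memory")
```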

One benefit of directly connecting the chips is a potential decrease in data latency, which means that each CPU in the cluster will spend less time idling and more time actually processing data, according to Gilad Shainer, director of technical marketing for Mellanox Technologies, a leading supplier of semiconductor-based server and storage interconnect products.

“AMD has a good vision of how HPC should be handled,” Shainer says. “Its technological architecture provides value for many applications and end users, which is why we’re happy to collaborate with it to build the kind of balanced systems that companies want to buy.”

Real-Life Productivity
A chip architecture that handles data more efficiently can also make life easier for HPC programmers—an important issue in IT groups that may have limited access to top programming talent.

“One of the big limitations in HPC is adapting programs to run in parallel,” says Dell’s Layton. “The computer industry has been struggling for years with limitations on memory bandwidth per core, but that’s finally beginning to ease up, largely as the result of improvements in basic CPU architecture.”
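A minimal sketch of the kind of adaptation Layton is describing appears below: a serial loop over independent work items is fanned out across cores with Python’s standard multiprocessing module. The workload itself is a placeholder, not a real HPC kernel.

```python
# Minimal sketch: a serial loop over independent work items split
# across cores. The simulate() function is a stand-in workload,
# not a real HPC kernel.
from multiprocessing import Pool


def simulate(case_id: int) -> float:
    """Stand-in for an independent, compute-heavy task."""
    total = 0.0
    for i in range(1, 200_000):
        total += (case_id % 7 + 1) / (i * i)
    return total


if __name__ == "__main__":
    cases = list(range(64))

    # Serial version: one core does everything.
    serial_results = [simulate(c) for c in cases]

    # Parallel version: the same work fanned out across available cores.
    with Pool() as pool:
        parallel_results = pool.map(simulate, cases)

    assert serial_results == parallel_results
```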

Because programming for HPC is becoming easier, it’s beginning to show up in more industries and application areas. And that, in turn, has further lessened the importance of raw computing power as the primary HPC benchmark, because every industry has different requirements when it comes to the type of computing power that’s applicable to that industry. For example, financial HPC applications make extensive use of floating point, an area in which AMD’s architecture has a “slight edge” over other architectures, according to Christian Heidarson, an analyst at the market research firm Gartner.
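As a rough illustration of the floating-point-heavy workloads Heidarson refers to, the sketch below prices a European call option by Monte Carlo simulation under a standard lognormal model using NumPy. Every parameter is a made-up assumption; production financial codes run far larger simulations across many nodes.

```python
# Illustrative floating-point-heavy financial kernel: Monte Carlo
# pricing of a European call option under a standard lognormal model.
# All parameters are made-up assumptions.
import numpy as np

rng = np.random.default_rng(seed=42)

spot, strike = 100.0, 105.0        # assumed current price and strike
rate, vol, years = 0.03, 0.2, 1.0  # assumed risk-free rate, volatility, maturity
n_paths = 1_000_000

# Simulate terminal prices and discount the average payoff.
z = rng.standard_normal(n_paths)
terminal = spot * np.exp((rate - 0.5 * vol**2) * years + vol * np.sqrt(years) * z)
payoff = np.maximum(terminal - strike, 0.0)
price = np.exp(-rate * years) * payoff.mean()

print(f"Estimated option price: {price:.2f}")
```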

HPC-friendly chip architecture can also make a future upgrade path easier. “Because HPC applications tend to be complex, companies are leery of pulling out their current systems and replacing them with new ones,” explains Layton, who notes that AMD has been designing CPU architectures that are socket-compatible, making it possible to upgrade a system without reloading and reconfiguring the software. The only change to the system that’s required is a BIOS upgrade, which takes a few minutes as opposed to the hours or days it might take to completely reconstruct a clustered system. “This makes it possible for a company to upgrade while limiting the downtime and cost risks inherent in re-creating and reinitializing the entire cluster,” says Layton.

In short, the raw performance of a single CPU may not be the best measurement of HPC. Rather, a metric such as the total cost of ownership (TCO) can provide a better baseline by which to judge systems and their underlying chip architecture.
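What a TCO-based comparison might look like is sketched below: acquisition cost plus power and administration over the system’s life, divided by the work delivered. All figures are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope TCO sketch: acquisition cost plus power and
# administration over the system's life, divided by delivered work.
# Every number here is an illustrative assumption.

def tco_per_unit_of_work(purchase_usd, kw_draw, admin_usd_per_year,
                         sustained_tflops, years=4, kwh_price=0.12):
    energy_usd = kw_draw * 24 * 365 * years * kwh_price
    total_usd = purchase_usd + energy_usd + admin_usd_per_year * years
    tflop_hours = sustained_tflops * 24 * 365 * years
    return total_usd, total_usd / tflop_hours


for name, args in {
    "fast_but_hungry": (2_500_000, 350.0, 200_000, 95.0),
    "balanced":        (2_200_000, 240.0, 180_000, 85.0),
}.items():
    total, per_tflop_hour = tco_per_unit_of_work(*args)
    print(f"{name}: ${total:,.0f} over 4 years, "
          f"${per_tflop_hour:.2f} per teraflop-hour delivered")
```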

“It’s a big change from the way people are used to thinking about HPC,” says AMD’s Schultz. “However, focusing on productivity means that companies can purchase their computer power more wisely and get the most benefit from their IT dollars.”

For more on HPC solutions based on AMD Opteron™ processors, go to www.amd.com/istanbulsolutions.
