Who’s Driving High Performance Computing?

By Michael Feldman

July 28, 2006

Gone are the days when the U.S. government alone could determine the direction of supercomputing. The commercial growth of HPC over the last two decades has fundamentally changed that dynamic. Adoption of high performance computing in bio-sciences, the financial sector, geo-sciences, engineering and other areas has transformed the supercomputing user base in a relatively short period of time.

For many HPC vendors this is a good thing. IDC reports that revenues for the high performance computing market grew by 24 percent in 2005, reaching $9.2 billion. A majority of this revenue is commercial HPC, although the government still represents a significant share. Classified HPC defense spending alone is over a billion dollars.

But maybe more significantly, the vast majority of really high-end supercomputing and cutting-edge research is done with the support of government money. Most multi-million dollar HPC capability systems reside in government-funded supercomputing centers, federal research labs and various undisclosed locations at national security facilities. In the brave new world of commercial HPC, million-dollar-plus capability platforms are the exception. According to IDC, the revenue for these kinds of systems has actually been declining for several years, as less expensive machines have taken their place. Most of the rapid growth in HPC is the result of commodity-based cluster computing, which represented about half of the $9.2 billion in revenue in 2005.

But the U.S. government has some unique problems to solve. Extremely powerful supercomputers are required to support national security applications like nuclear weapons design and testing, cryptography, and aeronautics. Other commercial and scientific applications in areas such as applied physics, biotechnology/genomics, climatological modeling, and engineering can usually (but not always) make do with less-capable systems. Bleeding-edge commercial applications, such as nanoscale simulation of drug interactions, are emerging, but most of these are being facilitated by government support.

A March 2006 report by the Joint U.S. Defense Science Board and the UK Defence Scientific Advisory Council Task Force on Defense Critical Technologies concluded the following:

“Multiple studies, such as the recently completed [November 2004] National Research Council study, conclude that '…the supercomputing needs of the government will not be satisfied by systems developed to meet the demands of the broader commercial market.' The government must bear primary responsibility for ensuring that it has the access to the custom systems that it requires. While leveraging developments in the commercial computing marketplace will satisfy many needs, the government must routinely plan for developing what the commercial marketplace will not, and it must budget the necessary funds.”

Last week at a hearing before the Senate Subcommittee on Technology, Innovation and Competitiveness, several industry and government representatives offered testimony to address some of these issues.

One of the industry representatives to testify was Christopher Jehn, vice president of Government Programs at Cray Inc. He sounded the alarm that advances in HPC technology have slowed and that the promise of commodity-based supercomputers has not materialized. He attributed this to the fact that general-purpose processors and other commodity-based technologies used to build supercomputers were designed for other purposes — essentially, personal computing and enterprise computing. The result is that scientists must expend a lot of effort to get HPC software to run efficiently on these homogeneous commodity-based machines.

“Over the last decade, the computer industry has standardized on commodity processors,” observed Jehn. “With high volume low-cost processors, supercomputer clusters consisting of commodity parts held out a promise to users of ever-more powerful supercomputers at much lower cost. At the same time, the federal government dramatically reduced investments in supercomputing innovation, leaving the future of supercomputing in the hands of industry. But from industry's perspective, the supercomputing market is not large enough to justify significant investment in unique processor designs and custom interconnects — as the supercomputer market is less than two percent of the overall server marketplace, according to International Data Corporation. To advance supercomputing, industry has relied on leveraging innovation from the personal computer and server markets.”

This reflects Cray's “Adaptive Computing” pitch — to advance HPC in a meaningful way, we have to move from homogeneous systems to heterogeneous ones. Just scaling up the current architectures won't get us there. The implication is that the government needs to make a significant investment in technology beyond commodity-based computing.

Dr. Irving Wladawsky-Berger, vice president of Technical Strategy and Innovation at IBM, offered a different perspective. He warned that the government cannot afford to ignore market realities when funding HPC projects. During his career he has witnessed the failure of supercomputing companies that relied solely on government-based projects and were heedless of marketplace requirements. He cited IBM's success with its Blue Gene architecture as an example of leveraging commodity technology — in this case, PowerPC processors — to build cutting-edge systems.

“Supercomputing was once confined to a niche market, because the hardware was so very expensive,” stated Wladawsky-Berger. “That changed over time with the introduction of workstation and PC-based technologies, the latter becoming immensely popular in Linux clusters during the late 1990s. Today, we even use low-power, low-cost micros — consumer-based technologies — to attain very high degrees of parallelism and performance, as in our Blue Gene system, which has reached a peak of 360 trillion calculations per second. Now, we are seeking to build supercomputers using technologies from the gaming world, such as the Cell processor. All these approaches leverage components from high-volume markets, and aggregate them using specialized architectures; thus the costs are significantly lower than in earlier days and the potential markets are consequently much bigger.”

From IBM's point of view, Blue Gene is an affirmation that commodity-based supercomputing is a practical model for the future and the government should pay attention to market viability as it looks to invest in new programs.
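As a rough illustration of the aggregation Wladawsky-Berger describes, the back-of-envelope arithmetic below shows how a peak in the neighborhood of 360 teraflops can emerge from a very large number of individually modest cores. The node count, clock rate and per-core rate are approximate published figures for the full Blue Gene/L configuration, not numbers taken from the testimony.

% Approximate Blue Gene/L figures: 65,536 dual-core nodes, 700 MHz PowerPC 440
% cores, each capable of up to 4 floating-point operations per cycle.
\[
\underbrace{65{,}536 \times 2}_{131{,}072\ \text{cores}}
\;\times\;
\underbrace{0.7\,\text{GHz} \times 4\,\tfrac{\text{flops}}{\text{cycle}}}_{\approx 2.8\ \text{GFLOPS per core}}
\;\approx\; 367\ \text{TFLOPS peak}
\]

No single core here is remarkable; it is the sheer count, together with the specialized architecture that aggregates them, that produces the headline number. That is the crux of IBM's argument for building high-end systems from high-volume parts.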

Dr. Joseph Lombardo, director of the National Supercomputing Center for Energy and the Environment at the University of Nevada, Las Vegas, described some of the history of the U.S. government's past investments in high performance computing. He suggested that federal HPC interests, academia and the larger HPC community are all intertwined, and that the government needs to act accordingly. He noted that after a brief period of interest in “Grand Challenge” applications in the late 1980s and early 1990s, the government switched its focus to distributed computing and COTS technology. While these initiatives brought a broader range of individuals into scientific computing, he said, they also starved high-end HPC R&D. But after the rise of Japanese supercomputing in the 1990s, the U.S. government once again refocused its efforts on high-end supercomputing.

“At the end of the 1990's DARPA and other organizations began to see that foreign countries, such as Asian groups, were overtaking the U.S. position in high performance computing once again, and recommended policies that would fund and support the high end of the field once again,” said Lombardo. “The DARPA High Productivity Computing Systems program is a good example of this shift back toward an emphasis on high-end capability. The DARPA program is focused on providing a new generation of economically viable high productivity computing systems for the national security and industrial user community in the 2010 timeframe. This trend has continued with the High Performance Computing Revitalization Act, the President's 2006 state of the Union Address, and with the FY 07 budget which increased DOE's high performance computing programs by almost $100 million.”

Lombardo's comments suggest that we can balance the government's and industry's needs for advanced supercomputing with market realities. He points to the DARPA HPCS program as an example of this approach.

But HPCS may also expose a potential conflict in the government's role. The program's stated goal of “providing a new generation of economically viable high productivity computing systems for national security and for the industrial user community” suggests that HPCS intends to address both the government's and industry's supercomputing needs, and do so within a commercially viable framework. The implication is that all these objectives are compatible.

But national security represents a rather specific set of very high-end supercomputing applications, while the industrial user community represents a very diverse range of HPC users. Can a single supercomputing model (or two) satisfy everyone? Even if we limit the industrial users to potential petascale customers like Boeing, I might still ask the same question.

And what is really meant by “commercially viable”? For supercomputing systems that push the envelope, commercial viability has always been problematic. Vendors usually don't expect such systems to make money straight out of the lab. I understand the desire to produce a general-purpose petascale solution, but it makes me uncomfortable to think the government is going to try to predict the economic viability of a future architecture. After all, DARPA isn't a market research firm.

As commercial HPC continues to expand, the government will be increasingly challenged to control the direction of future supercomputing architectures. Market realities are pushing hardware and software in a direction that diverges from the needs of some critical high-end HPC users, and will probably continue to do so. Such is the nature of capitalism, which, like processor scalability, has its limits. Market forces don't automatically produce optimal results. The government's role in HPC, as in other areas, should be to support our national interests.

-----

To find out more about what took place at the Senate's Subcommittee on Technology, Innovation, and Competitiveness hearing on HPC, take a look at our feature article that describes these proceedings. To learn more about the evolving relationship between HPC, the government and business competitiveness, read this week's interview with Suzy Tichenor, Council on Competitiveness vice president, and Bob Graybill, senior Council advisor.

As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].
