HPC User Forum Wrap-Up

By John E. West

October 12, 2007

Industry research group IDC hosts five to six User Forum meetings around the world each year. About 100 people participated in the most recent meeting, representing government, industry and academia, as well as all the major HPC vendors. Each User Forum has a theme; this one focused on the use of HPC in the energy industry.

What follows is a subjective selection of highlights and topics of potential general interest from this meeting, which took place in Santa Fe, N.M. on Sept. 26-27.

The Keynote

The meeting keynote was delivered by Victor Reis, Senior Advisor in the Office of the Secretary, Department of Energy. He has primary responsibility for the Global Nuclear Energy Partnership, part of President George W. Bush’s Advanced Energy Initiative, and he is also a member of the Strategic Advisory Group of the U.S. Strategic Command. Reis was the Director of Defense Research and Engineering when the DoD’s High Performance Computing Modernization Program started, and he was a senior official at DOE when it began the ASCI (Accelerated Strategic Computing Initiative) program.

Reis reviewed the history of ASCI and what it has accomplished to date, and then discussed a potential new DOE program involving physics-based design of nuclear reactors for peaceful energy production. He believes the timing is right to institute a new HPC program for this purpose and is gathering information to support one. He mentioned several potential modeling efforts that would contribute to the program, such as optimization of the nuclear reactor fuel cycle, design and qualification of new nuclear fuels, detailed modeling of new reactor designs, and environmental effects on nuclear reactors, particularly earthquakes. Several DOE talks followed, discussing modeling of fission reactors and the status of nuclear fusion research.

Energy-Related Discussions

The theme of this meeting was HPC in energy, so naturally there were several discussions of advanced energy research in addition to coverage in the keynote.

Keith Gray of BP discussed the company’s seismic imaging research and development, which uses HPC-scale processing to improve the information content of seismic images. He identified several basic computational challenges and requirements: large-memory nodes for development work, easier parallel tools, effective use of emerging multicore systems, and bigger and better file systems.

Mark Nimlos of the National Renewable Energy Laboratory discussed the status of various forms of alternative energy and concentrated on his work in the biofuels program, which has a goal of replacing 30 percent of current transportation fuels with biofuels by 2030. He is carrying out sophisticated molecular dynamics computations of how one of the key enzymes breaks down cellulose into sugars, with the intent of understanding how to optimize the process.

Pratul Agarwal of Oak Ridge National Laboratory, working in the same overall program, discussed the multiscale nature of biofuel processing and the need for collaboration between experimental and computational work. His group is considering new HPC technologies such as FPGAs and GPUs to accelerate the computation of the enzymatic pathways involved in the conversion of cellulose to sugars. He noted that the follow-on processing of sugars to alcohols (fermentation) is well understood, at least at the production level, thanks to many thousands of years of human experimentation with the process.

HPC Acquisition and Architectures

In addition to the domain-focused fare there were also several discussions of recent HPC acquisitions and new HPC architectures.

Rupak Biswas of NASA-Ames discussed their Columbia system and efforts underway to procure a replacement for it. He provided information on NASA’s HPC requirements as part of the discussion, highlighting growth of those requirements across several NASA directorates.

Richard Walsh of IDC, and formerly of the Army High Performance Computing Research Center (AHPCRC), provided his taxonomy of processor architectures and applications, stating that there were significant drivers toward heterogeneous processors in the near future.

John Daly of Los Alamos National Laboratory (LANL) discussed issues of running applications at large scale, including how to handle interrupts and how often to write checkpoint/restart files. He showed data indicating that even small jobs on very big systems can be at significant risk of interruption: the mean time between application interrupts does not fall off linearly with the number of processors in a job, and at low processor counts it is considerably shorter than a linear model would predict. This argues for alternative scheduling policies that favor jobs using large numbers of processors at the expense of long running times.
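Daly’s own published work on this topic gives a widely used first-order rule of thumb for how often to checkpoint: the optimal compute time between checkpoints is roughly the square root of twice the checkpoint write time times the mean time between interrupts. A minimal sketch, with illustrative numbers that are assumptions rather than figures from the talk:

```python
import math

def optimal_checkpoint_interval(dump_time_s: float, mtbi_s: float) -> float:
    """First-order approximation of the optimal compute time between
    checkpoints: tau ~ sqrt(2 * delta * M), where delta is the time to
    write one checkpoint and M is the mean time between interrupts.
    Valid when tau is small relative to M."""
    return math.sqrt(2.0 * dump_time_s * mtbi_s)

# Illustrative numbers (not from the talk): a 10-minute checkpoint dump
# on a system whose mean time between interrupts is 24 hours.
tau = optimal_checkpoint_interval(dump_time_s=600, mtbi_s=24 * 3600)
print(f"checkpoint every {tau / 3600:.1f} hours")  # roughly every 2.8 hours
```

The same formula shows why scale hurts: if interrupts arrive faster as jobs use more processors, the mean time between interrupts shrinks and jobs must checkpoint more frequently, spending a growing fraction of their runtime on defensive I/O.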

John Gustafson of ClearSpeed Technology presented some impressive speed-ups on a variety of application codes using the company’s accelerator technology, and also offered a rule-of-thumb estimate of the power density of current HPC systems: according to him, about 70 watts per liter.
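To get a feel for what 70 watts per liter implies, here is a back-of-the-envelope calculation; the rack dimensions are my own assumption for a standard 42U cabinet, not a figure from the talk:

```python
WATTS_PER_LITER = 70  # Gustafson's rule-of-thumb power density

# Assumed footprint for a standard 42U rack: roughly 0.6 m wide,
# 1.0 m deep, 2.0 m tall (illustrative, not from the talk).
rack_volume_liters = 0.6 * 1.0 * 2.0 * 1000  # 1 m^3 = 1000 liters
rack_power_kw = WATTS_PER_LITER * rack_volume_liters / 1000

print(f"~{rack_power_kw:.0f} kW per rack")  # ~84 kW
```

Taken at face value this comes to about 84 kW per rack, well above typical rack power draws of the era, which suggests the figure is best read as describing densely packed compute volume rather than a whole cabinet including airflow space.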

Finally, your very own John West gave a talk on what HPC systems will look like roughly ten years from now. After talking to multiple industry sources, I envision a multicore future in which general-purpose HPC systems of the 2017 era are built from chips containing hundreds of computational cores each (not thousands), among other interesting features. The presentation was, of course, fascinating.

University Panel

One of the panels at the forum involved representatives from several universities involved in HPC and university-affiliated computer centers: Penn State, Ohio Supercomputer Center, Pittsburgh Supercomputing Center, San Diego Supercomputer Center, the University of Minnesota, the University of Nevada at Las Vegas, the University of Tennessee, Utah State University, and Virginia Tech.

There was extensive discussion of the role of university computing centers versus NSF national computing centers. As expected, the university computing centers would like NSF support for their niche in the overall structure. Many of these universities offer unique degree programs in computational science. One suggestion arising from the discussion was to develop a partnership among the federal agencies (NSF, DoD and DOE) to promote and support these educational programs in computational science.

Data Intensive Computing Environment (DICE)

Roger Panton of Avetech, the executive director of the Data Intensive Computing Environment (DICE) program, provided the history and status of that program. DICE was motivated by the HEC/RTF report of several years ago and is a partnership among DoD, NASA and DOE. The goal of the program is to set up a testbed to evaluate data management technologies that could improve data accessibility over geographically distributed sites. Current organization partners include Advanced Simulation and Computing (ASC), Ohio Supercomputer Center (OSC), NASA-Goddard, and Avetech; the High Performance Computing Modernization Program (HPCMP) participates through ASC. New sites will include the Pacific Northwest National Laboratory and Sandia National Laboratories. Roger discussed future partners and projects.

The next two U.S. meetings of the HPC User Forum are scheduled for April 14-16, 2008 in Norfolk, Va., and Sept. 8-10, 2008 in Tucson, Ariz. You can find out more about these events by clicking over to http://hpcuserforum.com.
