Perspectives on HPC and the SC Series of Conferences

By Tiffany Trader (HPC)

November 12, 2008

Once a year, the leading experts from the world of high performance computing gather at SC to assess the current state of HPC and to look ahead to what the future holds. These are the people creating the technologies that will shape our lives. This year, as the conference celebrates an amazing 20 years, several industry thought leaders and long-time attendees reflect on what is most important to them.

Dan Reed, Director of Scalable and Multicore Computing Strategy at Microsoft

Dan Reed

The SC conference continues to grow in scale, scope and variety, with a diverse set of workshops, plenary speakers, technical program sessions and, of course, the massive exhibit floor. In addition to the public program, there is a seemingly endless series of sidebar meetings and lots of technical socializing. Take advantage of the fact that you can talk to almost anyone connected to high-performance computing during the conference, but remember that you can overdo it and never be seen at any of the official venues!

Undoubtedly, one of the great hallway discussion topics will be the effect of the economic downturn on HPC research, infrastructure acquisitions and vendor finances. It is quite possible that some startups and smaller companies may not survive. For those in the U.S., the Presidential transition and the implications for research funding will also be hot topics.

Finally, I suspect two other discussions will center on the relationship between academic Grids and commercial clouds and the relationship between trans-petascale (exascale) options and the design of extremely large data centers. The latter is deeply connected to eco-friendly computing system design and energy efficiency. Answers to these questions will affect the future of large-scale computing, our research investments, user communities and the types of applications we can support efficiently.

Remember — bring your running shoes. Your feet will thank you later.

Pete Ungaro, President and CEO of Cray Inc.

Pete Ungaro

Without a doubt, SC is the most important conference of the year for our community. This year marks the 20th anniversary of the conference but even more interesting is that this year’s event will officially kick off the start of a new era in HPC — the petascale era.

There will be a lot going on at the conference around petascale computing — vendors and customers highlighting their capabilities as well as end-users contemplating what will now be possible with this new-found power. A related and important theme will be green computing, especially how petascale systems can be built in a way that minimizes their impact on the environment. Cray, of course, is no exception — we are very excited about bringing petascale computing to our customers through our scalable system designs and innovative power and cooling technologies. We believe that the petascale era promises to enable significant technological breakthroughs as scientists and engineers are able to tackle larger problems with higher fidelity.

Be sure to take a few minutes to stop by our booth to see how we’re tackling the petascale challenge as well as bringing Cray supercomputing technology to individual users. Have a great conference!

Debra Goldfarb, President and CEO of Tabor Communications

Debra Goldfarb

Being an industry observer, I have seen a lot of change. Undoubtedly, we are entering an exciting innovation cycle in terms of technology, usage models and access. This year I have a few “rules of the road” which will guide my week in Austin:

Spend time on the periphery. There is a lot of interesting stuff to see which is not in the main hall, but rather in the small booths which sit out on the edges. This is where you can often get a window into “what’s next.” I will be looking for technology which enables productivity such as: appliances (application as well as infrastructure); development tools; application frameworks; energy efficiency concepts; and more adaptive access models (such as cloud or other web-service models).

Explore “Edge” HPC. Tabor Research is researching the use of HPC technologies and concepts outside of science and engineering. These include virtual worlds, ultra-scale infrastructure (such as search), complex event processing, and business optimization (such as real-time data mining). My goal is to better understand requirements, application evolution, and most importantly, what is in the “envelope” and what falls out.

The politics of science. Timing is everything and given the recent (and quite extraordinary) change in administration, it will be fascinating to get a read on what this means to this community. And, by the way, it should mean a lot in terms of priorities — in science, technology, industry and education.

Jack Dongarra, Distinguished Professor of EECS at the University of Tennessee

Jack Dongarra

I have attended all of the SC meetings, and wouldn’t miss it for the world; it represents “Homecoming Week” for High Performance Computing.

This is truly an awesome time for high performance computing and computational science research, with a number of systems having achieved performance exceeding the PFlop/s mark. There are a number of interesting problems that will need to be overcome as we are faced with systems with greater than a million threads of execution. Advancing to the next stage of growth for computational simulation and modeling will require us to solve basic research problems in Computer Science and Applied Mathematics at the same time as we create and promulgate a new paradigm for the development of scientific software.

To make progress on both fronts simultaneously will require a level of sustained, interdisciplinary collaboration among the core research communities that, in the past, has only been achieved by forming and supporting research centers dedicated to such a common purpose.

I see five important areas that will need attention: effective use of manycore and hybrid architectures, exploiting mixed precision in algorithms, self-adapting and auto-tuning software, fault-tolerant algorithms, and communication-avoiding algorithms.
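The mixed-precision idea Dongarra lists is typified by iterative refinement: do the expensive solve in fast low precision, then cheaply correct the answer with residuals computed in high precision. The sketch below is my own illustration of that pattern, not code from the article; it uses NumPy, with a float32 `solve` standing in for a low-precision factorization.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Solve Ax = b by mixed-precision iterative refinement:
    the (expensive) solve runs in float32, while residuals are
    computed in float64 to recover full double-precision accuracy."""
    A32 = A.astype(np.float32)
    # Initial low-precision solution.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                    # residual in float64
        d = np.linalg.solve(A32, r.astype(np.float32))   # correction in float32
        x += d.astype(np.float64)
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = mixed_precision_solve(A, b)
```

On well-conditioned systems the refinement loop converges in a handful of iterations, which is why the technique maps well onto hardware where single precision is much faster than double.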

William Feiereisen, Director DoD High Performance Computing, Lockheed Martin

William Feiereisen

I have attended most of the Supercomputing conferences since the 1980s. They have always been one of the central opportunities to gather with virtually the entire scientific computing community and to see the latest developments in the spectrum of technologies that interact to make up the field of supercomputing.

All of this is available in one place each year, everything from the applications and the important problems that they solve to the latest hardware upon which they run. There is a flavor of computational sciences which has always been my motivation and excitement about the field, but I also confess to not being immune to the latest raw hardware speed breakthroughs presented by each of the manufacturers.

I always plan my week around three things: the technical sessions and tutorials; the exhibits on the show floor; and, increasingly in recent years, everyone else who attends and the possibility of much personal interaction. When I started my career, the technical sessions and tutorials dominated my time at SC; however, I find much rewarding time is now spent in conversation over convention center coffee. Over the years I believe that many connections and ideas have been hatched at SC in just this way. There is a critical mass that gathers here each year and supports this atmosphere.

It’s for these reasons that I keep coming back each year and I look forward again this year to spending the week in Austin.

Marc Snir, Co-director, Universal Parallel Computing Research Center, University of Illinois

Marc Snir

Moore’s law no longer means ever-increasing processor performance; instead, it now means an ever-increasing number of processors on a chip. Just waiting for processor performance to catch up to your needs is no longer an option; the only way to increase application performance is to parallelize the application and scale it to an increasing number of processors.

This is a major new challenge. On the positive side, parallel programming is moving from being an esoteric art practiced by a few experts into a mainstream occupation. It has become a major concern of large companies, such as Microsoft and Intel (see, for example, their investment in the Universal Parallel Computing Research Centers at Illinois and Berkeley).

This is an opportunity for the HPC community: Rather than building support for parallelism on top of sequential languages and programming environments, it now becomes possible to scale up languages and environments that are built from the ground up to support parallelism and that are backed by massive investments.
