What Mainstream Supercomputing Might Mean for the World

By Gareth Spence

April 3, 2012

The age of “mainstream supercomputing” has been forecast for some years. There has even arisen something of a debate as to whether such a concept is possible at all – does “supercomputing,” by definition, cease to be “super” the moment it becomes “mainstream”?

Whether mainstream supercomputing is here, or ever literally can be, it is indisputable that more and more powerful capabilities are becoming available to more and more diverse users. The power of today’s typical workstations exceeds what constituted supercomputing not very long ago.

The question now is where all of this processing power – increasingly “democratized” – might eventually take the world. There are clues today of the mind-blowing benefits this rapidly evolving technology might yield tomorrow.

Better Products Faster – and Beyond

Supercomputing already undergirds some of the world’s most powerful state-of-the-art applications.

Computational fluid dynamics (CFD) is a prime example. In CFD, the flow and interaction of liquids and gases can be simulated and analyzed, enabling predictions and planning in a host of activities, such as developing better drug-delivery systems, assisting manufacturers in achieving compliance with environmental regulations and improving building comfort, safety and energy efficiency.
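To make the flavor of this concrete, consider a deliberately tiny sketch of the kind of calculation on which CFD codes are built: an explicit finite-difference solver for one-dimensional diffusion. Everything in it (grid size, diffusion coefficient, initial condition) is a hypothetical toy; production solvers discretize the full Navier-Stokes equations over three-dimensional meshes, which is exactly why they demand supercomputing-class resources.

```python
import numpy as np

# Toy explicit finite-difference solver for 1-D diffusion, a basic building
# block of CFD methods. All parameters are hypothetical toy values.

nx, nt = 101, 500           # grid points, time steps
dx = 1.0 / (nx - 1)         # grid spacing on a unit-length domain
nu = 0.05                   # diffusion coefficient (illustrative)
dt = 0.4 * dx**2 / nu       # time step within the explicit stability limit

u = np.zeros(nx)
u[nx // 4 : nx // 2] = 1.0  # initial "blob" of the diffusing quantity

for _ in range(nt):
    # u_t = nu * u_xx, with central differences in space and forward Euler
    # in time; the right-hand side is evaluated before the in-place update.
    u[1:-1] += nu * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(u.max())  # the blob spreads out and its peak decays over time
```

Scaling kernels like this from one dimension to billions of three-dimensional cells, time step after time step, is where the appetite for supercomputing comes from.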

Supercomputing has also enabled more rapid and accurate finite element analysis (FEA), which players in the aerospace, automotive and other industries use in defining design parameters, prototyping products and analyzing the impact of different stresses on a design before manufacturing begins. As in CFD, the benefits include slashed product-development cycles and costs and more reliable products – in short, better products faster.
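The finite element method can likewise be sketched in a few lines, purely as an illustration and not as any vendor’s actual code: a one-dimensional elastic bar, fixed at one end and pulled at the other, whose global stiffness matrix is assembled element by element and solved for the nodal displacements. The material constants and load below are hypothetical.

```python
import numpy as np

# Minimal 1-D finite element sketch (illustrative only): an elastic bar,
# fixed at the left end and pulled by a force F at the right end.
# All numbers are hypothetical, chosen just to make the example run.

E = 200e9      # Young's modulus (Pa), roughly steel
A = 1e-4       # cross-sectional area (m^2)
L = 1.0        # bar length (m)
n_elem = 10    # number of elements
F = 1e4        # applied end load (N)

n_nodes = n_elem + 1
le = L / n_elem                              # element length
k = (E * A / le) * np.array([[ 1.0, -1.0],
                             [-1.0,  1.0]])  # element stiffness matrix

# Assemble the global stiffness matrix from the element contributions.
K = np.zeros((n_nodes, n_nodes))
for e in range(n_elem):
    K[e:e + 2, e:e + 2] += k

f = np.zeros(n_nodes)
f[-1] = F                                    # point load at the free end

# Fix node 0 (the wall) by removing its row and column, then solve K u = f.
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

# The tip displacement should match the analytical answer F*L/(E*A).
print(u[-1], F * L / (E * A))
```

Real FEA models involve millions of elements, nonlinear materials and transient loads, which is why the industries named above turn to HPC for them.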

Weather forecasting and algorithmic trading are other applications that today rely heavily on supercomputing. Indeed, supercomputing is emerging as a differentiating factor in global competition across industries.

More Power to More People

As supercomputing’s enabling technologies – datacenter interconnection via fiber-optic networks and protocol-agnostic, low-latency Dense Wavelength Division Multiplexing (DWDM) techniques, processors, storage, memory, etc. – have grown ever more powerful, access to the capability has grown steadily more democratized. The introduction of tools such as Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL) has simplified the process of creating programs that run across the heterogeneous gamut of compute cores. And offerings of high-performance computing (HPC) as a service have emerged.
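The flavor of that heterogeneous model is easy to show. Below is a minimal sketch using the PyOpenCL bindings (one of several ways to drive OpenCL from Python; it assumes an OpenCL runtime and the pyopencl package are installed) that runs a trivial vector-addition kernel on whatever device the context picks up:

```python
import numpy as np
import pyopencl as cl

# Minimal PyOpenCL sketch (illustrative): add two vectors on whatever
# OpenCL device (CPU or GPU) the context selects.

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel itself is plain OpenCL C, portable across heterogeneous devices.
prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out)
{
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)  # copy results back from the device
assert np.allclose(out, a + b)
```

The same kernel source runs unchanged on GPUs, multicore CPUs and other accelerators, which is the portability OpenCL was designed to provide.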

Amazon Web Services (AWS), for example, has garnered significant attention with the rollout of an HPC offering that allows customers to select from a menu of elastic resources and pricing models. “Customers can choose from Cluster Compute or Cluster GPU instances within a full-bisection high bandwidth network for tightly-coupled and IO-intensive workloads or scale out across thousands of cores for throughput-oriented applications,” the company says. “Today, AWS customers run a variety of HPC applications on these instances including Computer Aided Engineering, molecular modeling, genome analysis, and numerical modeling across many industries including Biopharma, Oil and Gas, Financial Services and Manufacturing. In addition, academic researchers are leveraging Amazon EC2 Cluster instances to perform research in physics, chemistry, biology, computer science, and materials science.”
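Programmatic provisioning is part of the appeal. As a purely hypothetical sketch (the AMI ID, key pair and placement-group name are placeholders, and AWS credentials are assumed to be configured), a customer could stand up a small group of Cluster Compute instances with a few calls to the boto library:

```python
import boto.ec2

# Hypothetical sketch of provisioning Cluster Compute instances with boto.
# The AMI ID, key pair and placement-group name below are placeholders,
# and this assumes AWS credentials are already configured for boto.

conn = boto.ec2.connect_to_region("us-east-1")

# Cluster Compute instances go into a "cluster" placement group to get the
# full-bisection, high-bandwidth network described above.
conn.create_placement_group("hpc-demo-group", strategy="cluster")

reservation = conn.run_instances(
    image_id="ami-00000000",       # placeholder HVM AMI
    instance_type="cc2.8xlarge",   # a Cluster Compute instance type
    min_count=4,
    max_count=4,
    key_name="my-keypair",         # placeholder SSH key pair
    placement_group="hpc-demo-group",
)
print([instance.id for instance in reservation.instances])
```

Minutes later the nodes are running and billable by the hour, which is precisely the elasticity that distinguishes HPC as a service from owning a cluster.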

These technological and business developments within supercomputing have met with a gathering external enthusiasm to harness “Big Data.” More organizations of more types are seeking to process and base decision-making on more data from more sources than ever before.

The result of the convergence of these trends is that supercomputing – once strictly the domain of the world’s largest government agencies, research-and-education institutions, pharmaceutical companies and the few other giant enterprises with the resources to build (and power) clusters at tremendous cost – is gaining an increasingly mainstream base of users.

The Political Push

Political leaders in nations around the world see in supercomputing an opportunity to better protect their citizens and/or to enhance or at least maintain their economies’ standing in the global marketplace.

India, for example, is investing in a plan to indigenously develop by 2017 a supercomputer that it believes will be the fastest in the world – one delivering a performance of 132 quintillion operations per second (132 exaflops). Today’s speed leader, per the November 2011 TOP500 List of the world’s fastest supercomputers, is Japan’s K computer, which checks in at a mere 10 quadrillion calculations per second (10 petaflops). India’s goals for its investments are said to include enhancing its space-exploration program, monsoon forecasting and agricultural outputs.

Similar news has come out of the European Union. The European Commission’s reported motivations for doubling its HPC investment are to strengthen Europe’s presence on the TOP500 List and to protect and create jobs in the EU. Part of the plan is to encourage supercomputing usage especially among small and medium-sized enterprises (SMEs).

SMEs are the focus of a pilot U.S. program, too.

“For SMEs who are looking to advance their use of existing MS&A (modeling, simulation and analysis), access to HPC platforms is critical in order to increase the accuracy of their calculations (toward predictive capability), and decrease the time to solution so the design and production cycle can be reduced, thus improving productivity and time to market,” reads the overview for the National Digital Engineering and Manufacturing Consortium (NDEMC).

The motivation here is not simply to level the playing field for smaller businesses that are struggling to compete with larger ones. Big OEMs, in fact, help identify the SMEs that might be candidates for participating in the NDEMC effort, which was launched with funding from the U.S. Department of Commerce, state governments and private companies. One of the goals is to extend the product-development efficiencies and quality enhancements that HPC has already brought to the big OEMs to the smaller partners throughout their manufacturing supply chains.

Reasons the NDEMC: “The network of OEMs, SMEs, solution providers, and collaborators that make up the NDEMC will result in accelerated innovation through the use of advanced technology, and an ecosystem of like-minded companies. The goal is greater productivity and profits for all players through an increase of manufacturing jobs remaining in and coming back to the U.S. (i.e., onshoring/reshoring) and increases in U.S. exports.”

Frontiers of Innovation

Where might this democratization of supercomputing’s benefits take the world? How might the extension of this type of processing power to mass audiences ultimately impact our society and shared future? Some of today’s most provocative applications offer a peek into the revolutionary potential of supercomputing.

For example, Harvard Medical School’s Laboratory of Personalized Medicine is leveraging Amazon’s Elastic Compute Cloud service to develop “whole genome analysis testing models in record time,” according to an Amazon Web Services case study. By creating and provisioning scalable computing capacity in the cloud within minutes, the lab can execute its work more quickly, helping craft revolutionary preventive healthcare strategies tailored to individuals’ genetic characteristics.

Other organizations are leveraging Amazon’s high-performance computing services for optimizing wind-power installations, processing high-resolution satellite images and enabling innovations in the methods of reporting and consuming news.

Similarly, an association of R&E institutions in Italy’s Trieste territory, “LightNet,” has launched a network that allows its users to dynamically configure state-of-the-art services. Leveraging a carrier-class, 40Gbit/s DWDM solution for high-speed connectivity and dynamic bandwidth allocation, LightNet supports multi-site computation and data mining – as well as operation of virtual laboratories and digital libraries, high-definition broadcasts of surgical operations, remote control of microscopes, etc. – across a topology of interconnected, redundant fiber rings spanning 320 kilometers.

Already we are seeing proof that supercomputing enables new questions to be both asked and answered. That trend will only intensify as more of the world’s keenest and most creative thinkers gain access to this breakthrough capability.
