What Mainstream Supercomputing Might Mean for the World

By Gareth Spence

April 3, 2012

The age of “mainstream supercomputing” has been forecast for some years. There has even arisen something of a debate as to whether the concept is possible at all – does “supercomputing,” by definition, cease being “super” the moment it becomes “mainstream”?

Whether mainstream supercomputing is here, or ever literally can be, it is indisputable that increasingly powerful capabilities are becoming available to an increasingly diverse set of users. The power of today’s typical workstations exceeds what constituted supercomputing not very long ago.

The question now is where all of this processing power – increasingly “democratized” – might eventually take the world. There are clues today of the mind-blowing benefits this rapidly evolving technology might yield tomorrow.

Better Products Faster – and Beyond

Supercomputing already undergirds some of the world’s most powerful state-of-the-art applications.

Computational fluid dynamics (CFD) is a prime example. In CFD, the flow and interaction of liquids and gases can be simulated and analyzed, enabling predictions and planning in a host of activities, such as developing better drug-delivery systems, assisting manufacturers in achieving compliance with environmental regulations and improving building comfort, safety and energy efficiency.
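At its heart, CFD discretizes the governing flow equations over a mesh and marches them through time – which is what makes it so hungry for compute. As a minimal sketch of the idea (a toy one-dimensional diffusion solver in NumPy with illustrative values, not any production CFD code):

```python
import numpy as np

# Toy explicit finite-difference solver for the 1-D diffusion equation,
# du/dt = alpha * d2u/dx2 -- a tiny cousin of the coupled 3-D systems
# production CFD codes solve across thousands of cores.
nx, nt = 50, 500       # grid points and time steps (illustrative)
alpha = 0.01           # diffusivity (illustrative)
dx = 1.0 / (nx - 1)
dt = 1e-4              # satisfies the stability limit dt <= dx^2 / (2 * alpha)
u = np.zeros(nx)
u[nx // 2] = 1.0       # initial pulse in the middle of the domain

for _ in range(nt):
    # central-difference second derivative, applied to interior points
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(u.round(4))      # the pulse has diffused outward
```

Production solvers do the same kind of time-stepping for the Navier–Stokes equations in three dimensions, over millions of cells – hence the appetite for supercomputing.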

Supercomputing has also enabled more rapid and accurate finite element analysis (FEA), which players in the aerospace, automotive and other industries use in defining design parameters, prototyping products and analyzing the impact of different stresses on a design before manufacturing begins. As in CFD, the benefits include slashed product-development cycles and costs and more reliable products – in short, better products faster.
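The core of FEA is assembling each element’s stiffness contribution into one global system of equations and solving it under the applied loads. A minimal, hypothetical illustration – an elastic bar fixed at one end and pulled at the other, split into two-node elements, with all values chosen for illustration only:

```python
import numpy as np

# 1-D bar FEA sketch: E = Young's modulus (Pa), A = cross-section (m^2),
# L = length (m), discretized into n_elem equal two-node elements.
E, A, L, n_elem = 200e9, 1e-4, 1.0, 4
le = L / n_elem
k_e = (E * A / le) * np.array([[1.0, -1.0],
                               [-1.0, 1.0]])  # element stiffness matrix

n_nodes = n_elem + 1
K = np.zeros((n_nodes, n_nodes))
for e in range(n_elem):
    K[e:e + 2, e:e + 2] += k_e                # assemble global stiffness

F = np.zeros(n_nodes)
F[-1] = 1000.0                                # 1 kN axial load at the free end

u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])     # node 0 is fixed (u = 0)
print(u)                                      # nodal displacements, in metres
```

Real crash or airframe models assemble millions of such equations, which is exactly where the industrial demand for HPC comes from.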

Weather forecasting and algorithmic trading are other applications that today rely heavily on supercomputing. Indeed, supercomputing is emerging as a differentiating factor in global competition across industries.

More Power to More People

As supercomputing’s enabling technologies – processors, storage, memory and datacenter interconnection over fiber-optic networks using protocol-agnostic, low-latency Dense Wavelength Division Multiplexing (DWDM) – have grown ever more powerful, access to the capability has grown steadily more democratized. The introduction of tools such as the Compute Unified Device Architecture (CUDA) and the Open Computing Language (OpenCL) has simplified the process of writing programs that run across a heterogeneous gamut of compute cores. And offerings of high-performance computing (HPC) as a service have emerged.
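As a sketch of what that simplification looks like in practice – here using the PyCUDA bindings as the host language, an assumption on my part since the article names only CUDA and OpenCL – a GPU kernel can be compiled and launched from a few lines of Python:

```python
import numpy as np
import pycuda.autoinit                 # creates a CUDA context on import
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# Compile a trivial CUDA kernel at runtime: elementwise vector addition.
mod = SourceModule("""
__global__ void vadd(float *c, const float *a, const float *b)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    c[i] = a[i] + b[i];
}
""")
vadd = mod.get_function("vadd")

n = 256
a = np.random.randn(n).astype(np.float32)
b = np.random.randn(n).astype(np.float32)
c = np.empty_like(a)

# drv.In/drv.Out handle the host-to-device and device-to-host copies.
vadd(drv.Out(c), drv.In(a), drv.In(b), block=(n, 1, 1), grid=(1, 1))
assert np.allclose(c, a + b)
```

The point is less the arithmetic than the workflow: what once required hand-written driver code is now a short script.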

Amazon Web Services (AWS), for example, has garnered significant attention with the rollout of an HPC offering that allows customers to select from a menu of elastic resources and pricing models. “Customers can choose from Cluster Compute or Cluster GPU instances within a full-bisection high bandwidth network for tightly-coupled and IO-intensive workloads or scale out across thousands of cores for throughput-oriented applications,” the company says. “Today, AWS customers run a variety of HPC applications on these instances including Computer Aided Engineering, molecular modeling, genome analysis, and numerical modeling across many industries including Biopharma, Oil and Gas, Financial Services and Manufacturing. In addition, academic researchers are leveraging Amazon EC2 Cluster instances to perform research in physics, chemistry, biology, computer science, and materials science.”
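What “HPC as a service” means operationally is that a cluster is requested through an API call rather than a procurement cycle. A hedged sketch using the boto library of the period – the AMI ID and node count below are placeholders, not details from the article:

```python
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

# A "cluster" placement group co-locates instances on the full-bisection,
# high-bandwidth network the AWS quote above describes.
conn.create_placement_group("hpc-demo", strategy="cluster")

reservation = conn.run_instances(
    "ami-12345678",                # placeholder image ID (assumption)
    min_count=8, max_count=8,      # an 8-node cluster, for illustration
    instance_type="cc2.8xlarge",   # Cluster Compute instance type of the era
    placement_group="hpc-demo",
)
print([inst.id for inst in reservation.instances])
```

Tearing the cluster down when a job finishes is equally programmatic, which is what makes the elastic pricing models viable.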

These technological and business developments within supercomputing have been met by gathering external enthusiasm to harness “Big Data.” More organizations of more types are seeking to process – and to base decision-making on – more data from more sources than ever before.

The result of the convergence of these trends is that supercomputing – once strictly the domain of the world’s largest government agencies, research-and-education institutions, pharmaceutical companies and the few other giant enterprises with the resources to build (and power) clusters at tremendous cost – is gaining an increasingly mainstream base of users.

The Political Push

Political leaders in nations around the world see in supercomputing an opportunity to better protect their citizens and/or to enhance or at least maintain their economies’ standing in the global marketplace.

India, for example, is investing in a plan to develop indigenously, by 2017, a supercomputer it believes will be the fastest in the world – one delivering a performance of 132 quintillion operations per second. Today’s speed leader, per the November 2011 TOP500 list of the world’s fastest supercomputers, is Japan’s K computer, which checks in at a mere 10 quadrillion calculations per second. India’s stated goals for the investment include enhancing its space-exploration program, monsoon forecasting and agricultural outputs.

Similar news has come out of the European Union. The European Commission’s motivation for doubling its HPC ante was reportedly to strengthen Europe’s presence on the TOP500 list and to protect and create jobs in the EU. Part of the plan is to encourage supercomputing usage especially among small and medium-sized enterprises (SMEs).

SMEs are the focus of a pilot U.S. program, too.

“For SMEs who are looking to advance their use of existing MS&A (modeling, simulation and analysis), access to HPC platforms is critical in order to increase the accuracy of their calculations (toward predictive capability) and decrease the time to solution so the design and production cycle can be reduced, thus improving productivity and time to market,” reads the overview of the National Digital Engineering and Manufacturing Consortium (NDEMC).

The motivation here is not simply to level the playing field for smaller businesses that are struggling to compete with larger ones. Big OEMs, in fact, help identify the SMEs that might be candidates for participating in the NDEMC effort, which launched with funding from the U.S. Department of Commerce, state governments and private companies. One of the goals is to extend the product-development efficiencies and quality enhancements that HPC has already brought to the big OEMs to the smaller partners throughout their manufacturing supply chains.

Reasons the NDEMC: “The network of OEMs, SMEs, solution providers, and collaborators that make up the NDEMC will result in accelerated innovation through the use of advanced technology, and an ecosystem of like-minded companies. The goal is greater productivity and profits for all players through an increase of manufacturing jobs remaining in and coming back to the U.S. (i.e., onshoring/reshoring) and increases in U.S. exports.”

Frontiers of Innovation

Where might this democratization of supercomputing’s benefits take the world? How might the extension of this type of processing power to mass audiences ultimately impact our society and shared future? Some of today’s most provocative applications offer a peek into the revolutionary potential of supercomputing.

For example, Harvard Medical School’s Laboratory of Personalized Medicine is leveraging Amazon’s Elastic Compute Cloud service in developing “whole genome analysis testing models in record time,” according to an Amazon Web Services case study. By creating and provisioning scalable computing capacity in the cloud within minutes, the Harvard Medical School lab is able to more quickly execute its work in helping craft revolutionary preventive healthcare strategies that are tailored to individuals’ genetic characteristics.

Other organizations are leveraging Amazon’s high-performance computing services for optimizing wind-power installations, processing high-resolution satellite images and enabling innovations in the methods of reporting and consuming news.

Similarly, an association of R&E institutions in Italy’s Trieste territory, “LightNet,” has launched a network that allows its users to dynamically configure state-of-the-art services. Leveraging a carrier-class, 40Gbit/s DWDM solution for high-speed connectivity and dynamic bandwidth allocation, LightNet supports multi-site computation and data mining – as well as operation of virtual laboratories and digital libraries, high-definition broadcasts of surgical operations, remote control of microscopes, etc. – across a topology of interconnected, redundant fiber rings spanning 320 kilometers.

Already we are seeing proof that supercomputing enables new questions to be both asked and answered. That trend will only intensify as more of the world’s most creative and keenest thinkers gain access to this breakthrough capability.
