What Mainstream Supercomputing Might Mean for the World

By Gareth Spence

April 3, 2012

The age of “mainstream supercomputing” has been forecast for some years. There has even arisen something of a debate as to whether such a concept is even possible – does “supercomputing,” by definition, cease being “super” the moment it becomes “mainstream?”

Whether mainstream supercomputing is here or ever literally can be, however, it is indisputable that increasingly powerful capabilities are becoming available to increasingly diverse users. The power of today’s typical workstations exceeds what constituted supercomputing not very long ago.

The question now is where all of this processing power – increasingly “democratized” – might eventually take the world. There are clues today of the mind-blowing benefits this rapidly evolving technology might yield tomorrow.

Better Products Faster – and Beyond

Supercomputing already undergirds some of the world’s most powerful state-of-the-art applications.

Computational fluid dynamics (CFD) is a prime example. In CFD, the flow and interaction of liquids and gases can be simulated and analyzed, enabling predictions and planning in a host of activities, such as developing better drug-delivery systems, assisting manufacturers in achieving compliance with environmental regulations and improving building comfort, safety and energy efficiency.
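
To make the idea concrete, here is a minimal sketch, in Python, of the kind of calculation CFD codes perform at vastly larger scale: a toy one-dimensional heat-diffusion problem marched forward with an explicit finite-difference scheme. All parameters are invented illustrative values, not drawn from any production solver.

```python
# Illustrative only: a toy 1D heat-diffusion solver using an explicit
# finite-difference scheme, a drastically simplified stand-in for the
# flow and transport equations production CFD codes solve at scale.
import numpy as np

def diffuse_1d(n_cells=100, n_steps=500, alpha=0.01, dx=0.01, dt=0.001):
    """March the 1D heat equation dT/dt = alpha * d2T/dx2 forward in time."""
    T = np.zeros(n_cells)
    T[n_cells // 2] = 100.0              # hot spot in the middle of the domain
    coeff = alpha * dt / dx**2           # must stay <= 0.5 for stability
    for _ in range(n_steps):
        T[1:-1] += coeff * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T

if __name__ == "__main__":
    profile = diffuse_1d()
    print("peak temperature after diffusion:", profile.max())
```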

Supercomputing has also enabled more rapid and accurate finite element analysis (FEA), which players in the aerospace, automotive and other industries use in defining design parameters, prototyping products and analyzing the impact of different stresses on a design before manufacturing begins. As in CFD, the benefits include slashed product-development cycles and costs and more reliable products – in short, better products faster.
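
In the same spirit, a minimal FEA sketch, again with invented values rather than anything from a real design study: the snippet below assembles element stiffness matrices for an axially loaded 1D bar and solves K u = f with NumPy. Real analyses involve meshes with millions of degrees of freedom, which is where supercomputing earns its keep.

```python
# Illustrative only: a minimal 1D finite element analysis of an axially
# loaded bar, assembling element stiffness matrices and solving K u = f.
import numpy as np

def bar_fea(n_elements=10, length=1.0, E=210e9, area=1e-4, tip_load=1000.0):
    L_e = length / n_elements
    k_e = (E * area / L_e) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    n_nodes = n_elements + 1
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elements):                  # assemble global stiffness matrix
        K[e:e + 2, e:e + 2] += k_e
    f = np.zeros(n_nodes)
    f[-1] = tip_load                             # point load at the free end
    u = np.zeros(n_nodes)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])    # node 0 is clamped; solve the rest
    return u

if __name__ == "__main__":
    displacements = bar_fea()
    print("tip displacement [m]:", displacements[-1])
```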

Weather forecasting and algorithmic trading are other applications that today rely heavily on supercomputing. Indeed, supercomputing is emerging as a differentiating factor in global competition across industries.

More Power to More People

As supercomputing’s enabling technologies – datacenter interconnection via fiber-optic networks and protocol-agnostic, low-latency Dense Wavelength Division Multiplexing (DWDM) techniques, processors, storage, memory, etc. – have grown ever more powerful, access to the capability has grown steadily more democratized. The introduction of tools such as Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL) has simplified the process of creating programs that run across the heterogeneous gamut of compute cores. And high-performance computing (HPC) is now being offered as a service.
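
To make that “heterogeneous gamut of compute cores” concrete, here is a minimal data-parallel kernel sketch. It uses Numba’s CUDA bindings rather than the C-based CUDA or OpenCL toolchains named above (an assumption made to keep the example in Python), and it requires a CUDA-capable GPU and the numba package.

```python
# Illustrative only: an element-wise vector-add kernel written with Numba's
# CUDA support, a Python stand-in for the kind of data-parallel kernel that
# CUDA and OpenCL make it practical to run across many compute cores.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)               # global thread index
    if i < out.shape[0]:           # guard against out-of-range threads
        out[i] = a[i] + b[i]

if __name__ == "__main__":
    n = 1_000_000
    a = np.random.rand(n).astype(np.float32)
    b = np.random.rand(n).astype(np.float32)
    d_a, d_b = cuda.to_device(a), cuda.to_device(b)
    d_out = cuda.device_array_like(a)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    vector_add[blocks, threads_per_block](d_a, d_b, d_out)

    result = d_out.copy_to_host()
    print("max error:", np.abs(result - (a + b)).max())
```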

Amazon Web Services (AWS), for example, has garnered significant attention with the rollout of an HPC offering that allows customers to select from a menu of elastic resources and pricing models. “Customers can choose from Cluster Compute or Cluster GPU instances within a full-bisection high bandwidth network for tightly-coupled and IO-intensive workloads or scale out across thousands of cores for throughput-oriented applications,” the company says. “Today, AWS customers run a variety of HPC applications on these instances including Computer Aided Engineering, molecular modeling, genome analysis, and numerical modeling across many industries including Biopharma, Oil and Gas, Financial Services and Manufacturing. In addition, academic researchers are leveraging Amazon EC2 Cluster instances to perform research in physics, chemistry, biology, computer science, and materials science.”
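
As a hypothetical sketch of how such elastic HPC resources can be requested programmatically, the snippet below uses boto3, the AWS SDK for Python (a tool not named in the AWS description above), to launch a handful of instances in a cluster placement group. The AMI ID is a placeholder and the instance type is only one example from the Cluster Compute family; both would need to be replaced for a real account and region.

```python
# Illustrative only: provisioning a small, tightly coupled cluster on EC2
# with boto3. The AMI ID is a placeholder; the instance type is merely an
# example of the Cluster Compute family described above.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A "cluster" placement group keeps instances on a low-latency,
# full-bisection-bandwidth network segment.
ec2.create_placement_group(GroupName="hpc-demo", Strategy="cluster")

response = ec2.run_instances(
    ImageId="ami-00000000",          # placeholder: substitute a real HPC-ready AMI
    InstanceType="cc2.8xlarge",      # example Cluster Compute instance type
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-demo"},
)

for instance in response["Instances"]:
    print("launched", instance["InstanceId"])
```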

These technological and business developments within supercomputing have met with a gathering external enthusiasm to harness “Big Data.” More organizations of more types are seeking to process and base decision-making on more data from more sources than ever before.

The result of the convergence of these trends is that supercomputing – once strictly the domain of the world’s largest government agencies, research-and-education institutions, pharmaceutical companies and the few other giant enterprises with the resources to build (and power) clusters at tremendous cost – is gaining an increasingly mainstream base of users.

The Political Push

Political leaders in nations around the world see in supercomputing an opportunity to better protect their citizens and/or to enhance or at least maintain their economies’ standing in the global marketplace.

India, for example, is investing in a plan to indigenously develop by 2017 a supercomputer that it believes will be the fastest in the world – one delivering a performance of 132 quintillion operations per second. Today’s speed leader, per the November 2011 TOP500 List of the world’s fastest supercomputers, is a Japanese model that checks in at a mere 10 quadrillion calculations per second. India’s goals for its investments are said to include enhancing its space-exploration program, monsoon forecasting and agricultural outputs.

Similar news has come out of the European Union. The European Commission’s motivation for doubling its HPC investment was reportedly to strengthen the EU’s presence on the TOP500 List and to protect and create jobs in the EU. Part of the plan is to encourage supercomputing usage especially among small and medium-sized enterprises (SMEs).

SMEs are the focus of a pilot U.S. program, too.

“For SMEs who are looking to advance their use of existing MS&A (modeling, simulation and analysis), access to HPC platforms is critical in order to increase the accuracy of their calculations (toward predictive capability), and decrease the time to solution so the design and production cycle can be reduced, thus improving productivity and time to market,” reads the overview for the National Digital Engineering and Manufacturing Consortium (NDEMC).

The motivation here is not simply to level the playing field for smaller businesses that are struggling to compete with larger ones. Big OEMs, in fact, help identify the SMEs that might be candidates for participating in the NDEMC effort, which launched with funding from the U.S. Department of Commerce, state governments and private companies. One of the goals is to extend the product-development efficiency and quality gains that HPC has already brought to the big OEMs to the smaller partners throughout their manufacturing supply chains.

Reasons the NDEMC: “The network of OEMS, SMEs, solution providers, and collaborators that make up the NDEMC will result in accelerated innovation through the use of advanced technology, and an ecosystem of like-minded companies. The goal is greater productivity and profits for all players through an increase of manufacturing jobs remaining in and coming back to the U.S. (i.e. onshoring/reshoring) and increases in U.S. exports.”

Frontiers of Innovation

Where might this democratization of supercomputing’s benefits take the world? How might the extension of this type of processing power to mass audiences ultimately impact our society and shared future? Some of today’s most provocative applications offer a peek into the revolutionary potential of supercomputing.

For example, Harvard Medical School’s Laboratory of Personalized Medicine is leveraging Amazon’s Elastic Compute Cloud service in developing “whole genome analysis testing models in record time,” according to an Amazon Web Services case study. By creating and provisioning scalable computing capacity in the cloud within minutes, the Harvard Medical School lab is able to more quickly execute its work in helping craft revolutionary preventive healthcare strategies that are tailored to individuals’ genetic characteristics.

Other organizations are leveraging Amazon’s high-performance computing services for optimizing wind-power installations, processing high-resolution satellite images and enabling innovations in the methods of reporting and consuming news.

Similarly, an association of R&E institutions in Italy’s Trieste territory, “LightNet,” has launched a network that allows its users to dynamically configure state-of-the-art services. Leveraging a carrier-class, 40Gbit/s DWDM solution for high-speed connectivity and dynamic bandwidth allocation, LightNet supports multi-site computation and data mining – as well as operation of virtual laboratories and digital libraries, high-definition broadcasts of surgical operations, remote control of microscopes, etc. – across a topology of interconnected, redundant fiber rings spanning 320 kilometers.

Already we are seeing proof that supercomputing enables new questions to be both asked and answered. That trend will only intensify as more of the world’s most creative and keenest thinkers gain access to this breakthrough capability.
