The Top Supercomputing-Led Discoveries of 2013

By Nicole Hemsoth

January 2, 2014

2013 was an incredible year for the entire supercomputing ecosystem, from vendors pushing new technologies to boost performance, capacity, and programmability to researchers turning up new insights with fresh techniques. While exascale has taken more of a backseat than we might have predicted at the end of 2010, there are plenty of signs that production HPC environments are blazing new trails.

As the calendar flips into 2014, we wanted to cast a backward glance at select discoveries and opportunities made possible by the fastest systems in the world and the people who run them—all drawn from our news archives of the past year and organized along some important thematic lines.

We’ve pulled over 30 examples of how supercomputers are set to change the world in 2014 and beyond, and while this list is anything but exhaustive, it does show how key segments in research and industry are evolving with HPC.

In a Galaxy Far, Far Away…

One of the most famous “showcase” areas where HPC finds a mainstream shine is when news breaks of startling answers to questions as big as “where do we come from?” and “what is the universe made of?” As one might expect, 2013 was a banner year for discoveries that reached well beyond Earth.

This year, Kraken at the National Institute for Computational Sciences (NICS) at the University of Tennessee, Knoxville addressed some large-scale, stubborn classical physics problems with revolutionary protoplanetary disk research, while another massive system, the Opteron-powered “Hopper” Cray XE6 at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Lab, lit up the cosmos.

Back in October, a team of scientists from ETH Zurich and the University of Leeds solved a 300-year-old riddle about the nature of the Earth’s rotation. Using the Cray XE6 supercomputer “Monte Rosa” installed at CSCS, the researchers uncovered the reason for the gradual westward movement of the Earth’s magnetic field.

This past November, astrophysics researchers at UC San Diego advanced their understanding of star formation with the help of major computational resources from the San Diego Supercomputer Center (SDSC) at UC San Diego and the National Institute for Computational Sciences (NICS) at Oak Ridge National Laboratory.

Researchers at the Universities of Leeds and Chicago harnessed supercomputing power to uncover an important mechanism behind the generation of astrophysical magnetic fields such as that of the sun. Researchers at the Institute for Computational Cosmology (ICC) are using HPC to model phenomena ranging from solar flares to the formation of galaxies. Others, including an NSF-supported team from the University of Iowa, spent 2013 applying plenty of supercomputing might to measure space turbulence directly in the laboratory for the first time, allowing the world to finally see the dynamics behind it.

Others spent part of their year bolstering current resources to aid in new discoveries. For instance, a new 60-teraflop supercomputer and one-petabyte high-speed storage system were recently installed on the campus of the University of California, Santa Cruz, giving astrophysicists at the university the computational and storage headroom they need to model the heavens like never before.

Medical Discovery and Research

Ever-growing datasets fed by information from a widening pool of sources pushed medical research even further into supercomputing territory this year, with the modeling and simulation requirements of viruses, genomic data, organ systems, and more continuing to climb.

Supercomputing’s reach into the human brain was one of the most widely cited research items in medical circles in 2013. This year the Human Brain Project was carried forward by a host of supporting institutions and systems, and the topic was the subject of several lectures and keynotes that are worth reviewing before a fresh year begins.

Cancer research is another important field that is increasingly reliant on powerful systems. For instance, this year researchers at Emory University reported a significant improvement in their ability to analyze and understand changes in cancer tumors over time, thanks to HPC work done on a Keeneland Project supercomputer. Analysis of high-resolution cancer tumor images that used to take weeks can now be completed in a matter of minutes on the hybrid GPU-CPU system.

Complex viruses, including HIV, were also the subject of a great deal of supercomputer-powered research this year. Researchers at the University of Illinois Urbana-Champaign successfully modeled the interior of the HIV-1 virus using the Blue Waters system, opening the door to new antiretroviral drugs that target HIV-1, the virus that causes AIDS.

Other diseases, including malaria, were the target of additional innovative research. Pittsburgh Supercomputing Center (PSC) and the University of Notre Dame received up to $1.6 million in funding from the Bill & Melinda Gates Foundation to develop a system of computers and software for the Vector Ecology and Control Network (VECNet), an international consortium working to eradicate malaria. The new VECNet Cyber-Infrastructure (CI) Project will support VECNet’s effort to unite research, industrial, and public policy efforts to attack one of the worst diseases in the developing world in more effective, economical ways.

Armed with vaccines, however, health workers can stop viruses in their tracks, assuming such life-saving measures are delivered effectively. A supercomputer simulation of the West African nation of Niger showed that improving transportation could raise vaccine availability among children and mothers from roughly 50 percent to more than 90 percent.

On another front, researchers came a step closer to understanding strokes this year. A team from UC Berkeley and the University of California San Diego (UCSD) used the supercomputing resources of the National Energy Research Scientific Computing Center (NERSC) to model the efficacy of microbubbles and high intensity focused ultrasound (HIFU) for breaking up stroke-causing clots.

Other noteworthy advances powered by world-class systems emerged this year in medical areas as diverse as autism research and medical physics. As ever more computing capacity comes online in the coming year, we expect the diverse medical field to produce stunning stories and discoveries in 2014.

Climate Change and Earth Science

The volume of scientific evidence in support of climate change is growing, a process powered by massive simulations, including those that put today’s changes in the context of global shifts over vast lengths of time.

For example, HPCwire’s Tiffany Trader wrote back in September on the “Climate Time Machine,” which relies on a database of extreme global weather events from 1871 to the present day, culled from newspaper weather reports and land and sea measurements for the first decades, along with more modern data. The team of top climate scientists fed the data into powerful supercomputers, including those at NERSC and the Oak Ridge Leadership Computing Facility in Tennessee, to create a virtual climate time machine. A sizable portion (12 percent) of the supercomputing resources at NERSC is allocated to global climate change research. That’s nearly 150 million processor-hours of highly tuned computational might focused on an issue that is critical to humanity’s future.
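
Those figures hang together on a back-of-the-envelope check (a hypothetical sketch; the Hopper-scale core count and the assumption of round-the-clock operation over a full year are ours, not the article's). If 150 million processor-hours is 12 percent of the pool, the implied total is about 1.25 billion hours per year, which squares with a machine of Hopper's scale:

```python
# Back-of-the-envelope check on the NERSC climate-allocation figures.
# Assumptions (ours, not the article's): a Hopper-class core count of
# roughly 153,000 and near-continuous operation over a calendar year.

climate_share = 0.12          # 12% of NERSC resources
climate_hours = 150e6         # ~150 million processor-hours

implied_total = climate_hours / climate_share
print(f"Implied total pool: {implied_total / 1e9:.2f} billion hours/year")

cores = 153_000               # assumed Hopper-scale core count
capacity = cores * 24 * 365   # core-hours available in one year
print(f"Hopper-scale capacity: {capacity / 1e9:.2f} billion core-hours/year")
```

The two totals land within roughly ten percent of each other, so the article's numbers are internally consistent.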

New systems emerged to tackle climate change data. For instance, Berkeley Lab’s Green Flash is a specialized supercomputer designed to showcase a way to perform more detailed climate modeling. The system uses customized Tensilica-based processors, similar to those found in iPhones, and communication-minimizing algorithms that cut down on the movement of data, to model the movement of clouds around the earth at a higher resolution than was previously possible, without consuming huge amounts of electricity.
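
The communication-minimizing idea generalizes well beyond any one machine. As a minimal sketch of the standard trade-off (a generic illustration, not Green Flash's actual algorithms; the function and parameters are hypothetical), a stencil code can exchange a ghost region of depth k once every k timesteps instead of a one-cell halo every step, accepting some redundant arithmetic in return for far fewer messages:

```python
# Toy cost model for communication-avoiding stencil timestepping
# (a generic illustration, not Green Flash's implementation).
# With a ghost region of depth k, a rank can advance k timesteps
# between exchanges, redundantly updating ghost cells in between.

def messages_and_extra_work(steps, k):
    """Return (message exchanges, redundant cell updates per side)."""
    exchanges = steps // k               # one deeper exchange per k steps
    # Between exchanges the usable ghost region shrinks by one cell per
    # step, so redundant updates per side total k*(k-1)/2 per exchange.
    extra_work = exchanges * k * (k - 1) // 2
    return exchanges, extra_work

for k in (1, 4, 16):
    msgs, extra = messages_and_extra_work(steps=1024, k=k)
    print(f"k={k:2d}: {msgs:4d} exchanges, {extra:5d} redundant updates/side")
```

Cutting the number of exchanges by an order of magnitude is exactly the kind of trade that pays off on hardware where moving a word of data costs far more energy than computing on it, which is the bet Green Flash makes.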

This year large-scale systems, like Blue Waters at NCSA, were used by a research team including Penn State engineers to enhance scientists’ understanding of global precipitation. The team used Blue Waters to tackle the problem of large gaps in precipitation data for large parts of the world. The goal is to help scientists and researchers move toward an integrated global water cycle observatory.

Other advances using new and enhanced data sources, including GIS, advanced satellite, and emerging sensor technologies, aided research into other aspects of climate change. From an NCAR-led project to predict air pollution to a range of similar efforts, the global climate change picture is filling in rapidly.

Manufacturing and Heavy Industry

Manufacturing has been a notable target of investment on both the vendor and research fronts in 2013, as political rhetoric, changes in traditional manufacturing jobs, the need for STEM-based education to support a new manufacturing future, and new technologies have all stepped up.

At the heart of industrial progress is a constant march toward more automation, efficiency and data-driven progress. As one might imagine, this offers significant opportunities for HPC modeling and simulation—not to mention for supercomputer-fed innovations in materials science, manufacturing processes and other areas.

Several facilities, including the Ohio Supercomputer Center, have lent helping hands to bring HPC to industry in 2013, and fresh efforts are springing up, including at Lawrence Livermore National Laboratory (LLNL). For instance, this year select industrial users had a crack at Vulcan, a 5-petaflop system with roughly 390,000 cores. With this machine, and a host of commercial applications to tune, LLNL is providing a much-needed slew of software and scaling support. The lab spent 2013 lining up participants to step up to the high-core line and see how more compute horsepower can push modeling and simulation limits while solving specific scalability issues.

In July, companies interested in testing the latest in low-cost carbon fiber had a new opportunity to partner with the Department of Energy’s Carbon Fiber Technology Facility. The CFTF, operated by Oak Ridge National Laboratory as part of the Department’s Clean Energy Manufacturing Initiative, opened earlier this year to find ways to reduce carbon fiber production costs and to work with the private sector to stimulate widespread use of the strong, lightweight material.

Research was turned into reality in a few interesting projects this year. For instance, a team of scientists and mathematicians at the DOE’s Lawrence Berkeley National Laboratory used their powerful number crunchers together with sophisticated algorithms to create cleaner combustion technologies that reduce the footprint of vehicles and machines. In another addition to cleaner manufacturing futures, scientists turned a lowly crustacean’s habits into a potentially beneficial process. The “Gribble” creature landed on the biofuel industry’s radar for its unique ability to digest wood in salty conditions, and researchers in the US and the UK are now putting the University of Tennessee’s Kraken supercomputer to work modeling an enzyme in the Gribble’s gut, which could unlock the key to developing better industrial enzymes in the future.

Another notable story related to industry came from Oak Ridge National Laboratory (ORNL), where researchers noted the importance of big rig trucks—a backbone of industry supply chains and product delivery. Most trucks get only about 6 miles to the gallon, and altogether they emit about 423 million pounds of CO2 into the atmosphere each year. South Carolina-based BMI Corp. partnered with ORNL researchers to develop the SmartTruck UnderTray System, “a set of integrated aerodynamic fairings that improve the aerodynamics of 18-wheeler (Class 8) long-haul trucks.” After installation, the typical big rig can expect fuel savings of between 7 and 12 percent, amounting to roughly $5,000 in annual fuel savings.
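
That dollar figure is easy to sanity-check against rough industry numbers (the annual mileage and diesel price below are our assumptions, not figures from ORNL or BMI):

```python
# Rough sanity check on the SmartTruck fuel-savings claim.
# Assumed inputs (not from the article): ~100,000 miles/year for a
# long-haul rig and ~$4/gallon diesel, typical for 2013.

miles_per_year = 100_000
mpg = 6.0                    # fuel economy cited in the article
price_per_gallon = 4.00

gallons_burned = miles_per_year / mpg
for savings_rate in (0.07, 0.12):
    saved = gallons_burned * savings_rate * price_per_gallon
    print(f"{savings_rate:.0%} savings -> ${saved:,.0f}/year")
```

Under these assumptions the savings band runs from roughly $4,700 to $8,000 a year, which puts the article's $5,000 figure comfortably at the low end of the range.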
