The Top Supercomputing Led Discoveries of 2013

By Nicole Hemsoth

January 2, 2014

2013 has been an incredible year for the entire ecosystem around supercomputing, from vendors pushing new technologies to boost performance, capacity, and programmability to researchers turning over new insights with fresh techniques. While exascale has taken more of a backseat than we might have predicted at the end of 2010, there are plenty of signs that production HPC environments are blazing new trails.

As the calendar flips into 2014, we wanted to cast a backward glance at select discoveries and opportunities made possible by the fastest systems in the world and the people who run them—all pulled from our news archives of the past year and organized along some important thematic lines.

We’ve pulled together over 30 examples of how supercomputers are set to change the world in 2014 and beyond. While this list is anything but exhaustive, it does show how key segments in research and industry are evolving with HPC.

In a Galaxy Far, Far Away…

One of the most famous “showcase” areas where HPC finds a mainstream shine is when news breaks of startling answers to questions as big as “where do we come from” and “what is the universe made of.” As one might expect, 2013 was a banner year for discoveries that reached well beyond Earth.

This year, Kraken at the National Institute for Computational Sciences (NICS) at the University of Tennessee Knoxville addressed some large-scale, stubborn classical physics problems with revolutionary protoplanetary disk research, while another massive system, the Opteron-powered “Hopper” Cray XE6 at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Lab, lit up the cosmos.

Back in October, a team of scientists from ETH Zurich and the University of Leeds solved a 300-year-old riddle about the nature of the Earth’s rotation. Using the Cray XE6 supercomputer “Monte Rosa” installed at CSCS, the researchers uncovered the reason for the gradual westward movement of the Earth’s magnetic field.

This past November, astrophysics researchers at UC San Diego advanced their understanding of star formation with the help of major computational resources from the San Diego Supercomputer Center (SDSC) at UC San Diego and the National Institute for Computational Sciences (NICS) at Oak Ridge National Laboratory.

Researchers at the Universities of Leeds and Chicago harnessed supercomputing power to uncover an important mechanism behind the generation of astrophysical magnetic fields such as that of the sun. Researchers at the Institute for Computational Cosmology (ICC) are using HPC to model phenomena ranging from solar flares to the formation of galaxies. And others, including an NSF-supported team from the University of Iowa, spent 2013 applying plenty of supercomputing might to measure space turbulence directly in the laboratory for the first time, allowing the world to finally see the dynamics behind it.

Some spent part of their year bolstering current resources to aid in new discoveries. For instance, a new 60-teraflops supercomputer and one-petabyte high-speed storage system were recently installed on the campus of the University of California at Santa Cruz, giving astrophysicists at the college the computational and storage headroom they need to model the heavens like never before.

Medical Discovery and Research

Ever-growing datasets fed by information from a widening pool of sources pushed medical research even further into supercomputing territory this year, driven by the modeling and simulation requirements of viruses, genomic data, and organs.

Supercomputing’s reach into the human brain was one of the most widely-cited research items in medical circles in 2013. This year the Human Brain Project was carried forward by a host of supporting institutions and systems and the topic was the subject of several lectures and keynotes that are worthy of reviewing before a fresh year begins.

Cancer research is another important field that is increasingly reliant on powerful systems. For instance, this year, researchers at Emory University reported a significant improvement in their ability to analyze and understand changes of cancer tumors over time thanks to HPC work done on a Keeneland Project supercomputer. Analysis of high resolution cancer tumor images that used to take weeks can now be completed in a matter of minutes on the hybrid GPU-CPU system.

Complex viruses, including HIV, were also the subject of a great deal of supercomputer-powered research this year. Researchers at the University of Illinois Urbana-Champaign successfully modeled the interior of the HIV-1 virus using the Blue Waters system, opening the door to new antiretroviral drugs that target HIV-1, the virus that causes AIDS.

Other diseases, including malaria, were the target of additional innovative research. Pittsburgh Supercomputing Center (PSC) and the University of Notre Dame received up to $1.6 million in funding from the Bill & Melinda Gates Foundation to develop a system of computers and software for the Vector Ecology and Control Network (VECNet), an international consortium working to eradicate malaria. The new VECNet Cyber-Infrastructure Project (CI) will support VECNet’s effort to unite research, industrial and public policy efforts to attack one of the worst diseases in the developing world in more effective, economical ways.

Armed with vaccines, however, viruses can be stopped in their tracks, assuming the delivery of such life-saving measures is done effectively. A supercomputer simulation of the West African nation of Niger showed that improving transportation could raise vaccine availability among children and mothers from roughly 50 percent to more than 90 percent.

On another front, researchers came a step closer to understanding strokes this year. A team from UC Berkeley and the University of California San Diego (UCSD) used the supercomputing resources of the National Energy Research Scientific Computing Center (NERSC) to model the efficacy of microbubbles and high intensity focused ultrasound (HIFU) for breaking up stroke-causing clots.

Other noteworthy advances powered by world-class systems emerged this year in medical areas as diverse as autism research and pushing new boundaries in medical physics. As ever-more computing capacity comes online in the coming year, we expect the diverse medical field to produce stunning stories and discoveries in 2014.

Climate Change and Earth Science

The volume of scientific evidence supporting climate change is growing, a process powered by massive simulations, including those that put the changes in the context of global shifts over vast lengths of time.

For example, HPCwire’s Tiffany Trader wrote back in September on the “Climate Time Machine,” which relies on a database of extreme global weather events from 1871 to the present day, culled from newspaper weather reports and land and sea measurements for the first decades, along with more modern data. A team of top climate scientists fed the data into powerful supercomputers, including those at NERSC and the Oak Ridge Leadership Computing Facility in Tennessee, to create a virtual climate time machine. A sizable portion (12 percent) of the supercomputing resources at NERSC is allocated to global climate change research. That amounts to nearly 150 million processor-hours of highly tuned computational might focused on an issue critical to humanity’s future.

New systems emerged to tackle climate change data. For instance, Berkeley Lab’s Green Flash is a specialized supercomputer designed to showcase a way to perform more detailed climate modeling. The system uses customized Tensilica-based processors, similar to those found in iPhones, and communication-minimizing algorithms that cut down on the movement of data, to model the movement of clouds around the earth at a higher resolution than was previously possible, without consuming huge amounts of electricity.

This year large-scale systems, like Blue Waters at NCSA, were used by a research team including Penn State engineers to enhance scientists’ understanding of global precipitation. The team used Blue Waters to tackle the problem of large gaps in precipitation data for large parts of the world. The goal is to help scientists and researchers move toward an integrated global water cycle observatory.

Other advances using new and enhanced data sources, including GIS, advanced satellite and emerging sensor technologies, were made to aid research into other aspects of climate change. From an NCAR-led project to predict air pollution to many others, the global climate change picture is filling in rapidly.

Manufacturing and Heavy Industry

Manufacturing has been a notable target of investment on both the vendor and research fronts in 2013, as political rhetoric, changes in traditional manufacturing jobs, the need for STEM-based education to support a new manufacturing future, and new technologies have all come to the fore.

At the heart of industrial progress is a constant march toward more automation, efficiency and data-driven progress. As one might imagine, this offers significant opportunities for HPC modeling and simulation—not to mention for supercomputer-fed innovations in materials science, manufacturing processes and other areas.

Several facilities, including the Ohio Supercomputer Center, have lent helping hands to bring HPC to industry in 2013, and fresh efforts are springing up, including at Lawrence Livermore National Laboratory (LLNL). For instance, this year select industrial users had a crack at Vulcan, a 5-petaflops system with 390,000 cores. With this, and a new host of commercial applications to tweak, LLNL is providing a much-needed slew of software and scaling support. The lab spent 2013 lining up participants to see how more compute horsepower can push modeling and simulation limits while solving specific scalability issues.

In July, companies interested in testing the latest in low-cost carbon fiber had a new opportunity to partner with the Department of Energy’s Carbon Fiber Technology Facility. The CFTF, operated by Oak Ridge National Laboratory as part of the Department’s Clean Energy Manufacturing Initiative, opened earlier this year to find ways to reduce carbon fiber production costs and to work with the private sector to stimulate widespread use of the strong, lightweight material.

Research was turned into reality in a few interesting projects this year. For instance, a team of scientists and mathematicians at the DOE’s Lawrence Berkeley National Laboratory used their powerful number crunchers together with sophisticated algorithms to create cleaner combustion technologies that reduce the footprint of vehicles and machines. In another addition to cleaner manufacturing futures, scientists turned a lowly crustacean’s habits into a potentially beneficial process. The “gribble” landed on the biofuel industry’s radar for its unique ability to digest wood in salty conditions. Now, researchers in the US and the UK are putting the University of Tennessee’s Kraken supercomputer to work modeling an enzyme in the gribble’s gut, which could unlock the key to developing better industrial enzymes in the future.

Another notable story related to industry came from Oak Ridge National Laboratory (ORNL), where researchers noted the importance of big rig trucks, a backbone of industry supply chains and product delivery. Most trucks get only about 6 miles to the gallon, and altogether they emit about 423 million pounds of CO2 into the atmosphere each year. South Carolina-based BMI Corp. partnered with ORNL researchers to develop the SmartTruck UnderTray System, “a set of integrated aerodynamic fairings that improve the aerodynamics of 18-wheeler (Class 8) long-haul trucks.” After installation, the typical big rig can expect fuel savings of between 7 and 12 percent, amounting to roughly $5,000 per year in fuel costs.
