The Top Supercomputing Led Discoveries of 2013

By Nicole Hemsoth

January 2, 2014

Oil and Gas, Renewable Energy

Oil and gas supercomputers are finding their way into the upper ranks of the Top500, with many of the major companies retrofitting their existing systems or cutting the ribbon on new centers.

Back in October, BP announced that it had opened a new facility in Houston, Texas, designed to house the “world’s largest supercomputer for commercial research.” The Center for High-Performance Computing is part of BP’s five-year, $100 million investment in computing.

As we reported this year, BP’s newest supercomputer was built by HP and Intel. With 2.2 petaflops of data-crunching potential, the new supercomputer has almost twice as much computing power as BP’s previous machine. It also comes with 1,000 TB of total memory and 23.5 petabytes of disk space (the equivalent of over 40,000 average laptop computers).
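The laptop comparison holds up as rough arithmetic. A quick sanity check, assuming an average laptop disk of roughly 500 GB (an assumption, not a figure from the article):

```python
# Back-of-envelope check of BP's storage comparison.
disk_pb = 23.5                 # quoted disk capacity, in petabytes
disk_gb = disk_pb * 1e6        # 1 PB = 1,000,000 GB (decimal units)
laptop_gb = 500                # assumed average laptop disk size, in GB
laptops = disk_gb / laptop_gb
print(round(laptops))          # on the order of tens of thousands of laptops
```

At ~500 GB per laptop, the result lands comfortably above the article’s “over 40,000” figure.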

Last December, HPCwire learned that BP planned to derive its FLOPs from a CPU-only strategy. The new system would employ about 67,000 CPUs, but no GPUs or Phis. At the time, Keith Gray, BP’s HPC center manager, told HPCwire that the British firm wasn’t ready to make the leap to heterogeneous computing. “We continue to test accelerators,” he shared in an email, “but have not built a strong business case for our complete application base.”

Also in oil and gas news this year was a new system from SGI, a variant of its ICE X HPC platform built for French oil and gas giant Total. The approximately $78 million system would clock in at about ninth place on the Top500 list as it currently stands, ringing in at around 2.3 petaflops of peak performance. SGI expects it to take the title of top commercial system this year, which is probably not an unreasonable assumption given its predicted performance across 110,592 Xeon E5-2670 cores and 442 TB of memory on this distributed-memory system.
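The quoted 2.3-petaflop figure is consistent with a simple peak-performance estimate (assuming the E5-2670’s 2.6 GHz base clock and Sandy Bridge’s 8 double-precision flops per core per cycle, which are standard figures for that chip rather than numbers from SGI):

```python
# Peak-flops estimate for Total's SGI ICE X system.
cores = 110_592
clock_ghz = 2.6            # Xeon E5-2670 base clock (assumed)
flops_per_cycle = 8        # Sandy Bridge AVX: 8 double-precision flops/cycle
gigaflops = cores * clock_ghz * flops_per_cycle   # GHz per core -> gigaflops total
peak_pflops = gigaflops / 1e6                     # gigaflops -> petaflops
print(f"{peak_pflops:.2f} PF")                    # ~2.30 PF, matching the article
```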

On the specs front, SGI points to the system’s data management capabilities: 7 PB of storage, built on its native InfiniteStorage disk arrays (17,000 disks in all) and DMF tiered storage virtualization backed by integrated Lustre.

In a partnership with Sandia National Laboratories, GE Global Research, the technology development arm of the General Electric Company, announced research that could significantly impact the design of future wind turbine blades. Using high-performance computing (HPC) to perform complex calculations, GE engineers have overcome previous design constraints, allowing them to begin exploring redesigned wind blades that are quieter and produce more power.

Back in May, the Colorado School of Mines revealed its new 155-teraflop supercomputer, dubbed “BlueM,” which is designed to let researchers run large simulations in support of the university’s core research areas while working at the forefront of algorithm development on a powerful hybrid system. The system will be housed at the National Center for Atmospheric Research (NCAR) in a major new collaboration between the two organizations.

Earthquakes, Tornadoes and Natural Disasters

Whizzing through 213 trillion calculations per second, the newly upgraded supercomputers of NOAA’s National Weather Service are now more than twice as fast at processing sophisticated computer models, providing more accurate forecasts further out in time. Nicknamed “Tide,” the supercomputer in Reston, Va., and its Orlando-based backup, named “Gyre,” are operating at 213 teraflops (TF), up from the 90 TF of the computers that preceded them. This higher processing power allows the National Weather Service to implement an enhanced Hurricane Weather Research and Forecasting (HWRF) model.
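The “more than twice as fast” claim follows directly from the quoted figures:

```python
# NOAA upgrade speedup from the quoted teraflop ratings.
new_tf, old_tf = 213, 90
speedup = new_tf / old_tf
print(f"{speedup:.2f}x")   # roughly 2.4x: indeed "more than twice as fast"
```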

The Met Office, the UK’s national weather service, relies on more than 10 million weather observations from sites around the world, a sophisticated atmospheric model and a £30 million IBM supercomputer to generate 3,000 tailored forecasts every day. Thanks to this advanced forecasting system, climate scientists were able to predict the size and path of October’s St. Jude’s Day storm four days before it formed.

University of Oklahoma associate professor Amy McGovern is working to revolutionize tornado and storm prediction. McGovern’s ambitious tornado modeling and simulation project seeks to explain why some storms generate tornadoes while others don’t. The research is giving birth to new techniques for identifying the likely path of twisters through both space and time.

The deadly EF5 tornado that hit Moore, Oklahoma on May 20 was unique in several ways. Not only was it one of the strongest twisters ever recorded, but forecasters were able to issue a tornado warning 36 minutes in advance, saving lives. As our own Alex Woodie reported, playing a part in that forecast was a Cray supercomputer at the National Institute for Computational Sciences (NICS). Darter, which has nearly 12,000 Intel Sandy Bridge cores and 250 teraflops of peak capacity, was used to calculate the detailed Storm-scale Ensemble Forecasts (SSEF) that regional forecasters rely on to predict tornadoes and other severe weather events, including the National Weather Service office in Norman, Oklahoma, which issued the life-saving 36-minute warning on May 20.

Under the sponsorship of the National Nuclear Security Administration’s Office of Defense Nuclear Nonproliferation R&D, Sandia National Laboratories and Los Alamos National Laboratory have partnered to develop a 3-D model of the Earth’s mantle and crust called SALSA3D, or Sandia-Los Alamos 3D. The purpose of this model is to help the US Air Force and the international Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) in Vienna, Austria, more accurately locate all types of explosions.

Back in June, SGI announced that its NVIDIA Tesla GPU-powered SGI Rackable servers had been deployed in the Department of Geosciences at Princeton University to drive next-generation earthquake research. The department will utilize five main open-source software packages and is leveraging NVIDIA GPUs for the SPECFEM3D ‘Sesame’ application, which simulates seismic wave propagation on regional and global scales.

Military and Defense

findata2We’ll leave the discussions about the NSA and mass surveillance for other publications, although it’s hard to imagine all of the system and software innovations that are going toward those efforts now—and did in 2013. While that topic has grabbed the mainstream ear this year, there are some noteworthy developments in national security and defense that were lost in the headlines.

Geospatial intelligence data collection methods are increasingly complex—and accordingly, the amount and quality of the data they produce are opening new opportunities for governments to exploit for military and defense purposes. GPU giant NVIDIA reached into this growing area this year by offering up a platform for the geospatial intelligence community with its GeoInt Accelerator. The goal of the packaged offering is to provide an integrated suite of tools for geospatial intelligence analysts as well as that community’s specialty developers, who are primed to take advantage of GPU speedups.

In addition to offering a number of key applications relevant to this community (from situational awareness and satellite imagery to object detection software), NVIDIA has also pulled together a number of relevant libraries for defense contractors and integrators to use in building GPU-accelerated applications, including its own Performance Primitives, the MATLAB Imaging Toolkit, CUDA FFT, AccelerEyes’ ArrayFire and other components.

Centers that take advantage of military and defense data are also growing. For instance, the Air Force Research Laboratory DoD Supercomputing Resource Center (DSRC) has a new addition to its fleet of supers: an SGI ICE X system called Spirit, which will be housed at Wright-Patterson Air Force Base in Dayton, Ohio. The top-20-class system, which is capable of 1.4 petaflops, will support various research, development, testing and evaluation projects, particularly on the aircraft and ship design fronts. Spirit boasts 4,608 nodes and 73,728 Xeon cores humming at 2.6 GHz, as well as 146 TB of memory and 4.6 PB of disk space.

The US Army Research Laboratory (ARL) took the wraps off a new supercomputing center in 2013—a center set to advance the service’s war-fighting capability. Two HPC systems have been installed at the ARL Supercomputing Center at Aberdeen Proving Ground, which was the home of ENIAC, the world’s first general-purpose electronic computer. It goes without saying that the two iDataPlex systems at the ARL Supercomputing Center have vastly more processing capacity than ENIAC, which was installed by the Army at APG in 1946 to do ballistics calculations. Whereas the Army’s new supercomputers can process trillions of floating-point operations per second, or teraflops, ENIAC could manage hundreds per second.
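The gulf between the two machines is easy to quantify. Taking “hundreds per second” as roughly 500 operations per second (an assumed round number, not a figure from the article):

```python
# How many times faster is a single teraflop than ENIAC?
eniac_ops = 500            # assumed: "hundreds" of operations per second
one_teraflop = 1e12        # one trillion floating-point operations per second
ratio = one_teraflop / eniac_ops
print(f"{ratio:.0e}")      # ~2e9: one teraflop is about two billion times ENIAC's rate
```

And the ARL systems deliver many teraflops each, pushing the ratio into the trillions.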

As Alex Woodie reported, “Army scientists and engineers will use the supercomputers to model and evaluate a wide range of soldier- and combat-vehicle-related materials in advance of actual manufacturing. This will accelerate product development by allowing the Army to invest the time and money for actual physical testing for only the products showing the highest promise through modeling.”

Financial Markets

As we see each year at the HPC on Wall Street event, and of course throughout the news cycle, the financial services sector as a whole is among the earliest adopters—and among the greatest commercial innovators—when it comes to HPC technologies.

Of course, this doesn’t mean that computers are always the worthy allies markets need them to be. Recall that there were some glitches in 2013, which led us to speculate on what the future of reliability will look like.

There were some striking technology developments this year for the sector, despite some of the more mainstream controversies about placing our utter faith in the “hands” of machines. For instance, London-based bank HSBC demonstrated that it may be able to save millions of dollars in computer costs by moving a portfolio pricing process from a grid of Intel Xeon processors to NVIDIA Tesla GPUs, reports Xcelerit, the company that helped the bank with its experiment by providing CUDA programming tools.

In April, Xcelerit reported on the promising experiment conducted by the Quantitative Risk and Valuation Group (QRVG) at HSBC, which reported more than $2.6 trillion in assets in 2012. The QRVG is responsible for running Credit Value Adjustment (CVA) processes every night over HSBC’s entire portfolio to compute its risk exposure, per Basel III requirements.
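The mechanics behind a nightly CVA run can be sketched with a toy Monte Carlo calculation. This is a hypothetical illustration, not HSBC’s proprietary model: the exposure process, hazard rate and recovery rate below are all assumed for the example. The idea is that CVA is roughly the loss-given-default times the expected exposure, weighted by the probability of counterparty default in each time interval:

```python
import numpy as np

# Toy Monte Carlo CVA: CVA ≈ (1 - R) * Σ_t EE(t) * P(default in interval t).
rng = np.random.default_rng(0)

n_paths, n_steps, T = 10_000, 50, 5.0      # illustrative sizes, 5-year horizon
dt = T / n_steps
times = np.linspace(dt, T, n_steps)

# Assumed exposure: driftless lognormal "portfolio value", positive part only.
vol = 0.2
z = rng.standard_normal((n_paths, n_steps))
paths = np.exp(np.cumsum(-0.5 * vol**2 * dt + vol * np.sqrt(dt) * z, axis=1))
exposure = np.maximum(paths - 1.0, 0.0)    # exposure only when value exceeds par
ee = exposure.mean(axis=0)                 # expected exposure profile EE(t)

# Assumed counterparty default: flat 2% hazard rate, 40% recovery.
hazard, recovery = 0.02, 0.4
surv = np.exp(-hazard * np.insert(times, 0, 0.0))
pd_marginal = surv[:-1] - surv[1:]         # P(default) in each interval

cva = (1 - recovery) * np.sum(ee * pd_marginal)
print(f"CVA per unit notional: {cva:.4f}")
```

A real CVA engine prices an entire netted portfolio at each time step, which is what makes the nightly run so computationally heavy—and such a natural fit for GPUs, since each path is independent.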

While we’re on the topic of financial services and the new year, make sure to take a look at what’s cooking for the 2014 HPC for Wall Street event.

Looking Ahead to 2014

We can expect to see these same general areas in next year’s summary, but the performance, capability and programmability will hopefully continue to improve, leading to more insights. Outside of these broader industry and research segments, we look forward to delivering more interesting topics that don’t fit the mold (like this year’s stories about GPUs and the mysteries of flying snakes or tracking the first dinosaur steps).

Thanks for joining us for another exciting year!

