The Top Supercomputing-Led Discoveries of 2013

By Nicole Hemsoth

January 2, 2014

Oil and Gas, Renewable Energy

Oil and gas supercomputers are finding their way into the upper ranks of the Top500, with many of the major companies retrofitting their existing systems or cutting the ribbon on new centers.

Back in October, BP announced that it had opened a new facility in Houston, Texas, designed to house the “world’s largest supercomputer for commercial research.” The Center for High-Performance Computing is part of BP’s five-year, $100 million investment in computing.

As we reported this year, BP’s newest supercomputer was built by HP and Intel. With 2.2 petaflops of data-crunching potential, the new supercomputer has almost twice as much computing power as BP’s previous machine. The system also comes with 1,000 TB of total memory and 23.5 petabytes of disk space (the equivalent of the storage in more than 40,000 average laptop computers).
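
That laptop comparison is simple arithmetic; a quick back-of-the-envelope sketch (the 500 GB per-laptop figure is our own rough 2013-era assumption, not BP's):

```python
# Back-of-the-envelope check of the "40,000 laptops" storage comparison.
# The 500 GB per-laptop drive size is an illustrative assumption.
disk_pb = 23.5
laptop_gb = 500

total_gb = disk_pb * 1_000_000   # 1 PB = 1,000,000 GB in decimal units
print(f"{total_gb / laptop_gb:,.0f} laptops")  # -> 47,000 laptops
```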

Last December, HPCwire learned that BP planned to derive its FLOPs from a CPU-only strategy. The new system would employ about 67,000 CPUs, but no GPUs or Phis. At the time, Keith Gray, BP’s HPC center manager, told HPCwire that the British firm wasn’t ready to make the leap to heterogeneous computing. “We continue to test accelerators,” he shared in an email, “but have not built a strong business case for our complete application base.”

Also in oil and gas news this year was a system SGI tweaked from its ICE X HPC line for French oil and gas giant Total. The approximately $78 million ICE X-based system would clock in at about ninth place on the Top500 list as it stands now, ringing in at around 2.3 petaflops, at least in terms of peak performance. SGI expects it to take the title of top commercial system this year, which is probably not an unreasonable assumption given its predicted performance across 110,592 Xeon E5-2670 cores and the 442 TB of memory split across this distributed-memory system.

On the specs front, SGI points to the system’s data management capabilities: 7 PB of storage built on its native InfiniteStorage disk arrays (17,000 disks in all) and its DMF tiered storage virtualization backed by integrated Lustre.

In a partnership with Sandia National Laboratories, GE Global Research, the technology development arm of the General Electric Company, announced research that could significantly impact the design of future wind turbine blades. Using high-performance computing (HPC) to perform complex calculations, GE engineers have overcome previous design constraints and begun exploring reengineered wind blades that are quieter and more prolific power-producers.

Back in May, the Colorado School of Mines revealed its new 155-teraflop supercomputer, dubbed “BlueM,” which is designed to let researchers run large simulations in support of the university’s core research areas while operating at the forefront of algorithm development on a powerful hybrid system. The system will be housed at the National Center for Atmospheric Research (NCAR) in a major new collaboration between the two organizations.

Earthquakes, Tornadoes and Natural Disasters

Whizzing through 213 trillion calculations per second, the newly upgraded supercomputers of NOAA’s National Weather Service are now more than twice as fast at processing the sophisticated computer models that provide more accurate forecasts further out in time. Nicknamed “Tide,” the supercomputer in Reston, Va., and its Orlando-based backup, named “Gyre,” are operating at 213 teraflops (TF), up from the 90 TF of the computers that preceded them. This higher processing power allows the National Weather Service to implement an enhanced Hurricane Weather Research and Forecasting (HWRF) model.

The Met Office, the UK’s national weather service, relies on more than 10 million weather observations from sites around the world, a sophisticated atmospheric model and a £30 million IBM supercomputer to generate 3,000 tailored forecasts every day. Thanks to this advanced forecasting system, its scientists were able to predict the size and path of October’s St. Jude’s Day storm four days before it formed.

University of Oklahoma associate professor Amy McGovern is working to revolutionize tornado and storm prediction. McGovern’s ambitious tornado modeling and simulation project seeks to explain why some storms generate tornadoes while others don’t. The research is giving birth to new techniques for identifying the likely path of twisters through both space and time.

The deadly EF5 tornado that hit Moore, Oklahoma, on May 20 was unique in several ways. Not only was it one of the strongest twisters ever recorded, but forecasters were able to issue a tornado warning 36 minutes in advance, saving lives. As our own Alex Woodie reported, a Cray supercomputer at the National Institute for Computational Sciences (NICS) played a part in that forecast. Darter, which has nearly 12,000 Intel Sandy Bridge cores and 250 teraflops of peak capacity, was used to calculate the detailed Storm-Scale Ensemble Forecasts (SSEF) that regional forecasters, including the National Weather Service office in Norman, Oklahoma, that issued the life-saving 36-minute warning on May 20, rely on to predict tornadoes and other severe weather events.
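
The SSEF products behind such warnings come from ensemble forecasting: running the same storm-scale model many times from slightly perturbed initial conditions and treating the spread of outcomes as a probability. A minimal sketch of the idea, with a toy chaotic map standing in for the real atmospheric model (nothing like the actual SSEF code):

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(state, steps=24):
    """Stand-in for a weather model: a chaotic logistic map."""
    for _ in range(steps):
        state = 3.9 * state * (1.0 - state)
    return state

# Perturb the analyzed initial state to reflect observation uncertainty.
analysis = 0.52
members = analysis + rng.normal(scale=0.01, size=50)
outcomes = np.array([toy_model(m) for m in members])

# Fraction of ensemble members exceeding a severe-weather threshold
# is read as the forecast probability of the event.
print("P(outcome > 0.8) =", (outcomes > 0.8).mean())
```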

Under the sponsorship of the National Nuclear Security Administration’s Office of Defense Nuclear Nonproliferation R&D, Sandia National Laboratories and Los Alamos National Laboratory have partnered to develop a 3-D model of the Earth’s mantle and crust called SALSA3D, or Sandia-Los Alamos 3D. The purpose of the model is to help the US Air Force and the international Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) in Vienna, Austria, more accurately locate all types of explosions.
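
Locating an event boils down to finding the source whose model-predicted travel times best match the arrivals recorded across a station network, which is exactly where a better 3-D velocity model pays off. A toy 2-D grid-search sketch under a constant-velocity Earth (our simplification; SALSA3D exists precisely to replace that crude assumption):

```python
import numpy as np

# Station coordinates (km) and a constant seismic velocity; both are toy
# stand-ins for a real network and the 3-D velocity model SALSA3D supplies.
stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [80.0, 90.0]])
v = 6.0  # km/s, a crustal P-wave ballpark

true_src, t0 = np.array([40.0, 60.0]), 10.0   # hidden source and origin time
arrivals = t0 + np.linalg.norm(stations - true_src, axis=1) / v

# Grid search: pick the candidate source minimizing arrival-time misfit.
xs = ys = np.linspace(0.0, 100.0, 201)
best, best_err = None, np.inf
for x in xs:
    for y in ys:
        pred = np.linalg.norm(stations - np.array([x, y]), axis=1) / v
        resid = arrivals - pred
        err = np.var(resid)   # unknown origin time drops out via the mean
        if err < best_err:
            best, best_err = (x, y), err

print("estimated epicenter:", best)  # ~ (40.0, 60.0)
```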

Back in June, SGI announced that its NVIDIA Tesla GPU-powered Rackable servers had been deployed in the Department of Geosciences at Princeton University to drive next-generation earthquake research. The department will utilize five main open-source software packages and is leveraging NVIDIA GPUs for the SPECFEM3D ‘Sesame’ application, which simulates seismic wave propagation on regional and global scales.
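
SPECFEM3D simulates seismic wave propagation with a spectral-element method; as a rough illustration of the underlying physics only (nothing like SPECFEM3D's actual numerics), here is a toy 1-D finite-difference solver for the acoustic wave equation:

```python
import numpy as np

# 1-D acoustic wave equation u_tt = c^2 * u_xx, leapfrog time stepping.
nx, nt = 400, 800
dx, c = 1.0, 1.0
dt = 0.5 * dx / c                  # respects the CFL stability limit

u_prev = np.zeros(nx)
u = np.zeros(nx)
u[nx // 2] = 1.0                   # impulsive "source" at the center

for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]   # discrete second derivative
    u_next = 2.0 * u - u_prev + (c * dt / dx) ** 2 * lap
    u_prev, u = u, u_next          # endpoints stay zero (reflecting walls)

print("wavefield energy proxy:", float(np.sum(u ** 2)))
```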

Military and Defense

We’ll leave the discussions about the NSA and mass surveillance to other publications, although it’s hard to imagine the scale of the system and software innovation going into those efforts now, as it did in 2013. While that topic grabbed the mainstream ear this year, there were some noteworthy developments in national security and defense that were lost in the headlines.

Geospatial intelligence data collection methods are increasingly complex, and accordingly, the amount and quality of the data they produce are opening new opportunities for governments to exploit for military and defense purposes. GPU giant NVIDIA reached into this growing area this year by offering a platform for the geospatial intelligence community with its GeoInt Accelerator. The goal of the packaged offering is to provide an integrated suite of tools for geospatial intelligence analysts, as well as for that community’s specialty developers who are primed to take advantage of GPU speedups.

In addition to a number of key applications relevant to this community (spanning situational awareness, satellite imagery and object detection software), NVIDIA has pulled together relevant libraries that defense contractors and integrators can use to build GPU-accelerated applications, including its own Performance Primitives, the MATLAB Imaging Toolkit, CUDA FFT, Accelereyes’ ArrayFire and more.

Centers that take advantage of military and defense data are also growing. For instance, the Air Force Research Laboratory DoD Supercomputing Resource Center (AFRL DSRC) has a new addition to its fleet of supers in Spirit, an SGI ICE X system housed at Wright-Patterson Air Force Base in Dayton, Ohio. The top-20-class system, capable of 1.4 petaflops, will support various research, development, test and evaluation projects, particularly on the aircraft and ship design fronts. Spirit boasts 4,608 nodes and 73,728 Xeon cores humming at 2.6 GHz, as well as 146 TB of memory and 4.6 PB of disk space.

The US Army Research Laboratory (ARL) took the wraps off a new supercomputing center in 2013, one set to advance the service’s war-fighting capability. Two HPC systems have been installed at the ARL Supercomputing Center at Aberdeen Proving Ground, which was the home of ENIAC, the world’s first general-purpose electronic computer. It goes without saying that the two iDataPlex systems at the center have vastly more processing capacity than ENIAC, which the Army installed at APG in 1946 to do ballistics calculations. Whereas the Army’s new supercomputers can process trillions of floating-point operations per second, or teraflops, ENIAC could manage hundreds of operations per second.

As Alex Woodie reported, “Army scientists and engineers will use the supercomputers to model and evaluate a wide range of soldier- and combat-vehicle-related materials in advance of actual manufacturing. This will accelerate product development by allowing the Army to invest the time and money for actual physical testing for only the products showing the highest promise through modeling.”

Financial Markets

As we see each year when HPC on Wall Street rolls around, and of course throughout the news cycle, the financial services sector as a whole is among the first adopters, and the greatest commercial innovators, when it comes to HPC technologies.

Of course, this doesn’t mean that computers are always the worthy allies the markets need them to be. Recall that there were some trading glitches in 2013, which led us to speculate on what the future of reliability might look like.

There were some striking technology developments this year for the sector, despite some of the more mainstream controversies about placing our utter faith in the “hands” of machines. For instance, London-based bank HSBC demonstrated that it may be able to save millions of dollars in computer costs by moving a portfolio pricing process from a grid of Intel Xeon processors to NVIDIA Tesla GPUs, reports Xcelerit, the company that helped the bank with its experiment by providing CUDA programming tools.

In April, Xcelerit reported on a promising experiment conducted by the Quantitative Risk and Valuation Group (QRVG) at HSBC, which reported more than $2.6 trillion in assets in 2012. The QRVG is responsible for running Credit Valuation Adjustment (CVA) processes every night over HSBC’s entire portfolio to compute its risk exposure, per Basel III requirements.
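
CVA is essentially an expectation over simulated market paths, the expected loss from a counterparty defaulting while the portfolio is in the money, which is why the nightly run parallelizes so naturally onto GPUs. A heavily simplified Monte Carlo sketch with a toy one-asset portfolio and made-up parameters (not HSBC's actual models):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup; every parameter here is an illustrative assumption.
n_paths, n_steps, T = 100_000, 50, 5.0
dt = T / n_steps
sigma, v0 = 0.2, 100.0           # mark-to-market volatility and start value
hazard, recovery = 0.02, 0.4     # flat default intensity and recovery rate

# Driftless lognormal paths for the portfolio's mark-to-market value.
z = rng.standard_normal((n_paths, n_steps))
increments = sigma * np.sqrt(dt) * z - 0.5 * sigma**2 * dt
paths = v0 * np.exp(np.cumsum(increments, axis=1))

# Expected positive exposure (EPE) above the v0 baseline: a toy stand-in
# for netting/collateral; only positive exposure is at risk on default.
epe = np.maximum(paths - v0, 0.0).mean(axis=0)

# Marginal default probabilities from a constant hazard rate.
t = np.arange(1, n_steps + 1) * dt
pd_marginal = np.exp(-hazard * (t - dt)) - np.exp(-hazard * t)

# CVA = loss-given-default times default-probability-weighted exposure.
cva = (1.0 - recovery) * np.sum(epe * pd_marginal)
print(f"toy CVA: {cva:.2f}")
```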

While we’re on the topic of financial services and the new year, make sure to take a look at what’s cooking for the 2014 HPC on Wall Street event.

Looking Ahead to 2014

We can expect to see these same general areas in next year’s summary, but the performance, capability and programmability will hopefully continue to improve, leading to more insights. Outside of these broader industry and research segments, we look forward to delivering more interesting topics that don’t fit the mold (like this year’s stories about GPUs and the mysteries of flying snakes or tracking the first dinosaur steps).

Thanks for joining us for another exciting year!
