The Top Supercomputing-Led Discoveries of 2013

By Nicole Hemsoth

January 2, 2014

Oil and Gas, Renewable Energy

Oil and gas supercomputers are finding their way into the upper ranks of the Top500, with many of the major companies retrofitting their existing systems or cutting the ribbon on new centers.

Back in October, BP announced that it had opened a new facility in Houston, Texas, designed to house the “world’s largest supercomputer for commercial research.” The Center for High-Performance Computing is part of BP’s five-year, $100 million investment in computing.

As we reported this year, BP’s newest supercomputer was built by HP and Intel. With 2.2 petaflops of data-crunching potential, the new machine has almost twice as much computing power as BP’s previous one. It also comes with 1,000 TB of total memory and 23.5 petabytes of disk space (the equivalent of more than 40,000 average laptop computers).
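For a sense of scale, here is a quick back-of-the-envelope check of that laptop comparison. This is just a sketch assuming a typical 2013-era laptop drive of around 500 GB; BP hasn’t published the figure behind the analogy.

```python
# Sanity-check the "over 40,000 laptops" comparison.
# Assumption: an "average laptop" of the era holds ~500 GB.
total_disk_pb = 23.5
pb_to_gb = 1_000_000          # 1 PB = 1,000,000 GB (decimal units)
laptop_gb = 500               # assumed average laptop disk size

laptops = total_disk_pb * pb_to_gb / laptop_gb
print(f"{laptops:,.0f} laptop-equivalents")   # ~47,000, consistent with "over 40,000"
```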

Last December, HPCwire learned that BP planned to derive its flops from a CPU-only strategy. The new system would employ about 67,000 CPUs, but no GPUs or Xeon Phis. At the time, Keith Gray, BP’s HPC center manager, told HPCwire that the British firm wasn’t ready to make the leap to heterogeneous computing. “We continue to test accelerators,” he shared in an email, “but have not built a strong business case for our complete application base.”

Also in oil and gas news this year was a new system from SGI, a variant of its ICE X HPC platform, for French oil and gas giant Total. The approximately $78 million system would land around ninth place on the Top500 list as it currently stands, ringing in at around 2.3 petaflops, at least in terms of peak Linpack numbers. SGI expects it to take the title of top commercial system this year, which is probably not an unreasonable assumption given its predicted performance across 110,592 Xeon E5-2670 cores and 442 TB of memory split across the distributed-memory system.

On the specs front, SGI points to the system’s data management capabilities: 7 PB of storage built on its native InfiniteStorage disk arrays (17,000 of them, to be exact), with its DMF tiered storage virtualization backed by integrated Lustre.

In a partnership with Sandia National Laboratories, GE Global Research, the technology development arm of the General Electric Company, announced research that could significantly influence the design of future wind turbine blades. Using high-performance computing (HPC) to perform complex calculations, GE engineers have overcome previous design constraints and begun exploring reengineered blades that are quieter and produce more power.

Back in May, the Colorado School of Mines revealed its new 155-teraflop supercomputer, dubbed “BlueM,” designed to let researchers run large simulations in support of the university’s core research areas while operating at the forefront of algorithm development on a powerful hybrid system. The system will be housed at the National Center for Atmospheric Research (NCAR) in a major new collaboration between the two organizations.

Earthquakes, Tornadoes and Natural Disasters

Whizzing through 213 trillion calculations per second, the newly upgraded supercomputers of NOAA’s National Weather Service are now more than twice as fast at processing sophisticated computer models, providing more accurate forecasts further out in time. Nicknamed “Tide,” the supercomputer in Reston, Va., and its Orlando-based backup, “Gyre,” operate at 213 teraflops (TF), up from the 90 TF of the machines they replaced. This higher processing power allows the National Weather Service to implement an enhanced Hurricane Weather Research and Forecasting (HWRF) model.

The Met Office, the UK’s national weather service, relies on more than 10 million weather observations from sites around the world, a sophisticated atmospheric model and a £30 million IBM supercomputer to generate 3,000 tailored forecasts every day. Thanks to this advanced forecasting system, its scientists were able to predict the size and path of October’s St. Jude’s Day storm four days before it formed.

University of Oklahoma associate professor Amy McGovern is working to revolutionize tornado and storm prediction. McGovern’s ambitious tornado modeling and simulation project seeks to explain why some storms generate tornadoes while others don’t. The research is giving birth to new techniques for identifying the likely path of twisters through both space and time.

The deadly EF5 tornado that hit Moore, Oklahoma on May 20 was unique in several ways. Not only was it one of the strongest twisters ever recorded, but forecasters were able to issue a tornado warning 36 minutes in advance, saving lives. As our own Alex Woodie reported, a Cray supercomputer at the National Institute for Computational Sciences (NICS) played a part in that forecast. Darter, which has nearly 12,000 Intel Sandy Bridge cores and 250 teraflops of peak capacity, was used to calculate the detailed Storm-Scale Ensemble Forecasts (SSEF) that regional weather forecasters rely on to predict tornadoes and other severe weather events; among them was the National Weather Service office in Norman, Oklahoma, which issued the 36-minute, life-saving warning on May 20.
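For readers curious what “ensemble forecasting” means in practice: the core idea is to run many simulations from slightly perturbed initial conditions and treat the spread of the results as a measure of forecast uncertainty. The sketch below illustrates only that idea; the real SSEF members are full storm-scale WRF runs on thousands of cores, and the toy model function here is purely a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_forecast(initial_state, hours):
    """Stand-in for one storm-scale model run (real SSEF members are
    full numerical weather simulations, not a one-line growth model)."""
    growth = rng.normal(1.02, 0.05)   # chaotic error growth per hour
    return initial_state * growth ** hours

# Perturb the initial conditions slightly for each ensemble member,
# then summarize the spread -- the essence of ensemble forecasting.
base_state = 1.0
members = [toy_forecast(base_state + rng.normal(0, 0.01), hours=6)
           for _ in range(20)]

print(f"ensemble mean:   {np.mean(members):.3f}")
print(f"ensemble spread: {np.std(members):.3f}")  # spread ~ uncertainty
```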

Under the sponsorship of the National Nuclear Security Administration’s Office of Defense Nuclear Nonproliferation R&D, Sandia National Laboratories and Los Alamos National Laboratory have partnered to develop a 3-D model of the Earth’s mantle and crust called SALSA3D, or Sandia-Los Alamos 3D. The purpose of this model is to help the US Air Force and the international Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) in Vienna, Austria, more accurately locate all types of explosions.
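To see why a better Earth model matters, consider how event location works in the first place: find the source position whose predicted seismic travel times best match the arrival times recorded at monitoring stations. The toy 2-D grid search below assumes a uniform wave speed; SALSA3D’s contribution is replacing that crude assumption with a detailed 3-D velocity model. All numbers here are hypothetical, not CTBTO data.

```python
import numpy as np

# Station coordinates (km) and synthetic P-wave arrival times (s),
# generated from a source near (60, 70) km with a 2 s origin time.
stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [80.0, 90.0]])
arrivals = np.array([13.5, 12.1, 10.4, 5.5])
speed = 8.0  # km/s, assumed uniform (SALSA3D swaps in a 3-D model here)

best = None
for x in np.linspace(0, 100, 201):
    for y in np.linspace(0, 100, 201):
        dist = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
        t_pred = dist / speed
        # Origin time is unknown; eliminate it by de-meaning residuals.
        resid = (arrivals - t_pred) - np.mean(arrivals - t_pred)
        misfit = np.sum(resid ** 2)
        if best is None or misfit < best[0]:
            best = (misfit, x, y)

# Should land near (60, 70) km, where the synthetic times came from.
print(f"estimated epicenter: ({best[1]:.1f} km, {best[2]:.1f} km)")
```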

Back in June, SGI announced that its NVIDIA Tesla GPU-powered Rackable servers had been deployed in the Department of Geosciences at Princeton University to drive next-generation earthquake research. The department will utilize five main open-source software packages and is leveraging NVIDIA GPUs for the SPECFEM3D ‘Sesame’ application, which simulates seismic wave propagation on regional and global scales.
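For a flavor of what a wave propagation code computes, here is a minimal 1-D acoustic analogue using finite differences. To be clear, SPECFEM3D itself uses spectral elements on full 3-D elastic Earth models and offloads the heavy kernels to GPUs; this sketch only shows the basic explicit time-stepping loop such codes are built around.

```python
import numpy as np

# 1-D acoustic wave equation u_tt = c^2 * u_xx, leapfrog time stepping.
nx, nt = 400, 800
dx, c = 1.0, 1.0
dt = 0.5 * dx / c                      # satisfies the CFL stability limit

u_prev = np.zeros(nx)
u = np.zeros(nx)
u[nx // 2] = 1.0                       # impulsive "source" in the middle

for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]   # discrete second derivative
    u_next = 2 * u - u_prev + (c * dt / dx) ** 2 * lap
    u_prev, u = u, u_next              # rigid boundaries at both ends

print(f"peak amplitude after {nt} steps: {np.abs(u).max():.3f}")
```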

Military and Defense

We’ll leave the discussions about the NSA and mass surveillance to other publications, although it’s hard to imagine the scale of system and software innovation going toward those efforts now, as it did in 2013. While that topic grabbed the mainstream ear this year, some noteworthy developments in national security and defense were lost in the headlines.

Geospatial intelligence data collection methods are increasingly complex, and the amount and quality of the data they produce are opening new opportunities for governments to exploit for military and defense purposes. GPU giant NVIDIA moved into this growing area this year with its GeoInt Accelerator, a platform for the geospatial intelligence community. The goal of the packaged offering is to provide an integrated suite of tools for geospatial intelligence analysts, as well as for that community’s specialty developers, primed to take advantage of GPU speedups.

In addition to offering a number of key applications relevant to this community (spanning situational awareness, satellite imagery and object detection software), NVIDIA has pulled together a number of relevant libraries for defense contractors and integrators to use in building GPU-accelerated applications, including its own Performance Primitives, the MATLAB Imaging Toolkit, CUDA FFT, AccelerEyes’ ArrayFire and more.
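The common thread across these libraries is data-parallel image math. The sketch below runs a toy edge-detection convolution on the CPU, with a random array standing in for a satellite image; the point is simply that per-pixel stencil operations like this are exactly what GPU imaging libraries accelerate, and the example is illustrative rather than drawn from any of the packages above.

```python
import numpy as np
from scipy.signal import convolve2d

# Toy horizontal-gradient (Sobel) filter over a stand-in "image".
rng = np.random.default_rng(1)
image = rng.random((512, 512))

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Every output pixel depends only on a small neighborhood, so the
# work parallelizes trivially across GPU threads in the real libraries.
edges = convolve2d(image, sobel_x, mode="same", boundary="symm")
print(f"max edge response: {np.abs(edges).max():.2f}")
```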

Centers that take advantage of military and defense data are also growing. For instance, the Air Force Research Laboratory DoD Supercomputing Resource Center (AFRL DSRC) has added to its fleet of supers a new SGI ICE X system called Spirit, housed at Wright-Patterson Air Force Base in Dayton, Ohio. The system, which is capable of 1.4 petaflops and ranks in the top 20 of the Top500, will support various research, development, testing and evaluation projects, particularly on the aircraft and ship design fronts. Spirit boasts 4,608 nodes and 73,728 Xeon cores humming at 2.6 GHz, as well as 146 TB of memory and 4.6 PB of disk space.

The US Army Research Laboratory (ARL) took the wraps off a new supercomputing center in 2013, one set to advance the service’s war-fighting capability. Two HPC systems have been installed at the ARL Supercomputing Center at Aberdeen Proving Ground, which was the home of ENIAC, the world’s first general-purpose electronic computer. It goes without saying that the two iDataPlex systems have vastly more processing capacity than ENIAC, which the Army installed at APG in 1946 to do ballistics calculations. Where the Army’s new supercomputers can process trillions of floating-point operations per second, or teraflops, ENIAC managed hundreds of operations per second, a leap of roughly ten orders of magnitude.

As Alex Woodie reported, “Army scientists and engineers will use the supercomputers to model and evaluate a wide range of soldier- and combat-vehicle-related materials in advance of actual manufacturing. This will accelerate product development by allowing the Army to invest the time and money for actual physical testing for only the products showing the highest promise through modeling.”

Financial Markets

As we see each year when HPC on Wall Street rolls around, and of course throughout the news cycle, the financial services sector as a whole is one of the earliest adopters, and among the greatest commercial innovators, when it comes to HPC technologies.

Of course, this doesn’t mean that computers are always the worthy allies the markets need them to be. Recall that there were some glitches in 2013, which led us to speculate on what the future of trading reliability will look like.

There were some striking technology developments for the sector this year, despite the more mainstream controversies about placing our utter faith in the “hands” of machines. For instance, London-based bank HSBC demonstrated that it may be able to save millions of dollars in computing costs by moving a portfolio-pricing process from a grid of Intel Xeon processors to NVIDIA Tesla GPUs, reports Xcelerit, the company that helped the bank with its experiment by providing CUDA programming tools.

In April, Xcelerit reported on the promising experiment conducted by the Quantitative Risk and Valuation Group (QRVG) at HSBC, which reported more than $2.6 trillion in assets in 2012. The QRVG is responsible for running Credit Value Adjustment (CVA) processes every night over HSBC’s entire portfolio to compute its risk exposure, per Basel III requirements.
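For the curious, CVA boils down to an expectation of discounted positive exposure weighted by the counterparty’s default probability, typically estimated by Monte Carlo over simulated market paths. Because each path is independent, the workload is embarrassingly parallel, which is why it maps so well to GPUs. The sketch below uses made-up parameters and a deliberately crude exposure model; it is not HSBC’s or Xcelerit’s code.

```python
import numpy as np

# Toy unilateral CVA: CVA = (1 - R) * sum_t D(t) * EE(t) * PD(t-dt, t)
# where EE(t) is expected positive exposure from simulated paths.
rng = np.random.default_rng(42)
n_paths, n_steps, horizon = 100_000, 50, 5.0       # illustrative sizes
dt = horizon / n_steps
r, sigma, hazard, recovery = 0.03, 0.2, 0.02, 0.4  # assumed parameters

# Simulate portfolio value as a driftless diffusion -- a stand-in for
# the nightly full-portfolio revaluation a real CVA run performs.
shocks = rng.normal(0.0, sigma * np.sqrt(dt), (n_paths, n_steps))
value = np.cumsum(shocks, axis=1)

times = dt * np.arange(1, n_steps + 1)
ee = np.maximum(value, 0.0).mean(axis=0)           # expected positive exposure
discount = np.exp(-r * times)
pd_slice = np.exp(-hazard * (times - dt)) - np.exp(-hazard * times)

cva = (1 - recovery) * np.sum(discount * ee * pd_slice)
print(f"toy CVA: {cva:.4f} (per unit notional)")
```

The path loop here is a single vectorized NumPy call; in a production CVA engine the same structure, thousands of independent revaluations per time step, is what gets fanned out across GPU threads or a CPU grid.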

While we’re on the topic of financial services and the new year, make sure to take a look at what’s cooking for the 2014 HPC on Wall Street event.

Looking Ahead to 2014

We can expect to see these same general areas in next year’s summary, but the performance, capability and programmability of the systems behind them will, we hope, continue to improve, leading to ever more insights. Outside of these broader industry and research segments, we look forward to delivering more interesting topics that don’t fit the mold (like this year’s stories about GPUs and the mysteries of flying snakes, or tracking the first dinosaur steps).

Thanks for joining us for another exciting year!
