The Top Supercomputing-Led Discoveries of 2013

By Nicole Hemsoth

January 2, 2014

Oil and Gas, Renewable Energy

Oil and gas supercomputers are finding their way into the upper ranks of the Top500, with many of the major companies retrofitting their existing systems or cutting the ribbon on new centers.

Back in October, BP announced that it had opened a new facility in Houston, Texas, designed to house the “world’s largest supercomputer for commercial research.” The Center for High-Performance Computing is part of BP’s five-year, $100 million investment in computing.

As we reported this year, BP’s newest supercomputer was built by HP and Intel. With 2.2 petaflops of data-crunching potential, it has almost twice the computing power of BP’s previous machine. The new system also comes with 1,000 TB of total memory and 23.5 petabytes of disk space (the equivalent of over 40,000 average laptop computers).
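That laptop comparison is easy to sanity-check. A quick back-of-the-envelope calculation, assuming a typical circa-2013 laptop drive of around 500 GB (our assumption, not BP's figure):

```python
# Sanity check of the "over 40,000 laptops" comparison for 23.5 PB
# of disk, assuming ~500 GB per average laptop (decimal units).
DISK_PB = 23.5
LAPTOP_GB = 500            # assumed average laptop drive size

total_gb = DISK_PB * 1e6   # 1 PB = 1,000,000 GB
laptops = total_gb / LAPTOP_GB
print(f"{laptops:,.0f} laptops")  # 47,000 laptops
```

At roughly 47,000 drives' worth, "over 40,000" holds up comfortably.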

Last December, HPCwire learned that BP planned to derive its flops from a CPU-only strategy. The new system would employ about 67,000 CPUs, but no GPUs or Phis. At the time, Keith Gray, BP’s HPC center manager, told HPCwire that the British firm wasn’t ready to make the leap to heterogeneous computing. “We continue to test accelerators,” he shared in an email, “but have not built a strong business case for our complete application base.”

Also in oil and gas news this year was a new system for French oil and gas giant Total, adapted by SGI from its ICE X HPC platform. The approximately $78 million system would clock in at about ninth place on the Top500 list as it stands now, ringing in at around 2.3 petaflops of theoretical peak performance. SGI expects it to take the title of top commercial system this year, which is probably not an unreasonable assumption given its predicted performance across 110,592 Xeon E5-2670 cores and 442 TB of memory split across this distributed-memory system.
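The quoted peak figure lines up with the core count. A rough sketch of the arithmetic, assuming the E5-2670's 2.6 GHz base clock and 8 double-precision flops per cycle per core (the Sandy Bridge AVX figure):

```python
# Rough theoretical-peak check for the Total system, assuming the
# Xeon E5-2670's 2.6 GHz base clock and 8 DP flops/cycle/core
# (Sandy Bridge with AVX).
CORES = 110_592
CLOCK_HZ = 2.6e9
FLOPS_PER_CYCLE = 8

peak_pflops = CORES * CLOCK_HZ * FLOPS_PER_CYCLE / 1e15
print(f"{peak_pflops:.2f} PF")  # 2.30 PF
```

Sustained Linpack numbers would come in somewhat below this theoretical ceiling, as they do on any real system.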

On the specs front, SGI points to the system’s data management capabilities: 7 PB of storage built on its native InfiniteStorage disk arrays (17,000 of them, to be exact) and DMF tiered storage virtualization backed by integrated Lustre.

In a partnership with Sandia National Laboratories, GE Global Research, the technology development arm of the General Electric Company, announced research that could significantly impact the design of future wind turbine blades. Utilizing the power of high-performance computing (HPC) to perform complex calculations, GE engineers have overcome previous design constraints and begun exploring reengineered wind blades that are quieter and more prolific power producers.

Back in May, the Colorado School of Mines revealed its new 155-teraflop supercomputer, dubbed “BlueM,” designed to let researchers run large simulations in support of the university’s core research areas while operating at the forefront of algorithm development on a powerful hybrid system. The system will be housed at the National Center for Atmospheric Research (NCAR) as part of a major new collaboration between the two organizations.

Earthquakes, Tornadoes and Natural Disasters

Whizzing through 213 trillion calculations per second, the newly upgraded supercomputers of NOAA’s National Weather Service are now more than twice as fast at processing the sophisticated computer models that provide more accurate forecasts further out in time. Nicknamed “Tide,” the supercomputer in Reston, Va., and its Orlando-based backup, “Gyre,” are operating at 213 teraflops (TF), up from the 90 TF of the computers that preceded them. This higher processing power allows the National Weather Service to implement an enhanced Hurricane Weather Research and Forecasting (HWRF) model.

The Met Office, the UK’s national weather service, relies on more than 10 million weather observations from sites around the world, a sophisticated atmospheric model and a £30 million IBM supercomputer to generate 3,000 tailored forecasts every day. Thanks to this advanced forecasting system, its scientists were able to predict the size and path of the St. Jude’s Day storm four days before it formed.

University of Oklahoma associate professor Amy McGovern is working to revolutionize tornado and storm prediction. McGovern’s ambitious tornado modeling and simulation project seeks to explain why some storms generate tornadoes while others don’t. The research is giving birth to new techniques for identifying the likely path of twisters through both space and time.

The deadly EF5 tornado that hit Moore, Oklahoma, on May 20 was unique in several ways. Not only was it one of the strongest twisters ever recorded, but forecasters were able to issue a tornado warning 36 minutes in advance, saving lives. As our own Alex Woodie reported, playing a part in that forecast was a Cray supercomputer at the National Institute for Computational Sciences (NICS). Darter, which has nearly 12,000 Intel Sandy Bridge cores and 250 teraflops of peak capacity, was used to calculate the detailed Storm-scale Ensemble Forecasts (SSEF) that regional weather forecasters — such as the National Weather Service office in Norman, Oklahoma, which issued the 36-minute, life-saving warning on May 20 — rely on to predict tornadoes and other severe weather events.

Under the sponsorship of the National Nuclear Security Administration’s Office of Defense Nuclear Nonproliferation R&D, Sandia National Laboratories and Los Alamos National Laboratory have partnered to develop SALSA3D (Sandia-Los Alamos 3D), a 3-D model of the Earth’s mantle and crust. The model is intended to help the US Air Force and the international Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) in Vienna, Austria, more accurately locate all types of explosions.

Back in June, SGI announced that its NVIDIA Tesla GPU-powered Rackable servers had been deployed in the Department of Geosciences at Princeton University to drive next-generation earthquake research. The department will utilize five main open-source software packages and is leveraging NVIDIA GPUs for the SPECFEM3D ‘Sesame’ application, which simulates seismic wave propagation on regional and global scales.

Military and Defense

We’ll leave the discussions about the NSA and mass surveillance to other publications, although it’s hard to imagine all of the system and software innovations that went toward those efforts in 2013. While that topic grabbed the mainstream ear this year, some noteworthy developments in national security and defense were lost in the headlines.

Geospatial intelligence data collection methods are increasingly complex, and accordingly, the amount and quality of the data they produce are opening new opportunities for governments to exploit for military and defense purposes. GPU giant NVIDIA courted this growing area this year with its GeoInt Accelerator, a platform for the geospatial intelligence community. The packaged offering aims to provide an integrated suite of tools for geospatial intelligence analysts, as well as for that community’s specialty developers primed to take advantage of GPU speedups.

In addition to bundling a number of key applications relevant to this community (including situational awareness, satellite imagery and object detection software), NVIDIA has pulled together a number of relevant libraries for defense contractors and integrators to use in building GPU-accelerated applications, including its own NVIDIA Performance Primitives, the MATLAB Imaging Toolkit, CUDA FFT, AccelerEyes’ ArrayFire and other contents.

Centers that take advantage of military and defense data are also growing. For instance, the Air Force Research Laboratory DoD Supercomputing Resource Center (DSRC) has added to its fleet of supers with a new SGI ICE X system called Spirit, housed at Wright-Patterson Air Force Base in Dayton, Ohio. The top-20-class system, capable of 1.4 petaflops, will support various research, development, test and evaluation projects, particularly on the aircraft and ship design fronts. Spirit boasts 4,608 nodes and 73,728 Xeon cores humming at 2.6 GHz, as well as 146 TB of memory and 4.6 PB of disk space.

The US Army Research Laboratory (ARL) took the wraps off a new supercomputing center in 2013, one set to advance the service’s war-fighting capability. Two HPC systems have been installed at the ARL Supercomputing Center at Aberdeen Proving Ground, which was the home of ENIAC, the world’s first general-purpose electronic computer. It goes without saying that the two iDataPlex systems have vastly more processing capacity than ENIAC, which the Army installed at APG in 1946 to do ballistics calculations. Whereas the Army’s new supercomputers can process trillions of floating-point operations per second, or teraflops, ENIAC could manage hundreds of operations per second.
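The scale of that gap is worth spelling out. A rough comparison, taking a single teraflop against an assumed 500 operations per second for ENIAC (our stand-in for "hundreds"):

```python
# Rough speedup of one teraflop over ENIAC, assuming ENIAC managed
# about 500 operations per second ("hundreds per second").
MODERN_FLOPS = 1e12   # one teraflop
ENIAC_OPS = 500       # assumed

speedup = MODERN_FLOPS / ENIAC_OPS
print(f"~{speedup:.0e}x")  # ~2e+09x
```

That is a factor of roughly two billion for every teraflop delivered, before the new systems' multi-teraflop capacity is even counted.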

As Alex Woodie reported, “Army scientists and engineers will use the supercomputers to model and evaluate a wide range of soldier- and combat-vehicle-related materials in advance of actual manufacturing. This will accelerate product development by allowing the Army to invest the time and money for actual physical testing for only the products showing the highest promise through modeling.”

Financial Markets

As we see each year around the HPC on Wall Street event, and of course throughout the news cycle, the financial services sector as a whole is one of the earliest adopters, and among the greatest commercial innovators, when it comes to HPC technologies.

Of course, this doesn’t mean that computers are always the worthy allies markets need them to be. Recall that there were some glitches in 2013, which led us to speculate on what the future of reliability will look like.

There were some striking technology developments this year for the sector, despite some of the more mainstream controversies about placing our utter faith in the “hands” of machines. For instance, London-based bank HSBC demonstrated that it may be able to save millions of dollars in computer costs by moving a portfolio pricing process from a grid of Intel Xeon processors to NVIDIA Tesla GPUs, reports Xcelerit, the company that helped the bank with its experiment by providing CUDA programming tools.

In April, Xcelerit reported on the promising experiment conducted by the Quantitative Risk and Valuation Group (QRVG) at HSBC, a bank that reported more than $2.6 trillion in assets in 2012. The QRVG is responsible for running Credit Value Adjustment (CVA) processes every night over HSBC’s entire portfolio to compute its risk exposure, per Basel III requirements.
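For readers unfamiliar with CVA, the nightly computation is essentially a large Monte Carlo job, which is why it maps so naturally onto GPUs. The sketch below is purely illustrative (not HSBC's or Xcelerit's actual model): it simulates mark-to-market paths for a single netting set as a driftless Brownian motion, with a flat discount rate and flat hazard rate, and every parameter here is an assumption for demonstration only.

```python
import numpy as np

# Illustrative Monte Carlo CVA: simulate mark-to-market paths,
# average the positive part of exposure at each date, then weight
# by discount factors and per-period default probabilities.
rng = np.random.default_rng(42)

n_paths, n_steps, T = 10_000, 50, 5.0              # paths, steps, years
dt = T / n_steps
sigma, r, recovery, hazard = 0.2, 0.03, 0.4, 0.02  # all assumed

# Mark-to-market as a driftless Brownian motion starting at zero
# (a crude stand-in for, say, a swap's value over time).
dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
mtm = np.cumsum(sigma * dW, axis=1)

t = dt * np.arange(1, n_steps + 1)
epe = np.maximum(mtm, 0.0).mean(axis=0)            # expected positive exposure
df = np.exp(-r * t)                                # flat discounting
pd_incr = np.exp(-hazard * (t - dt)) - np.exp(-hazard * t)  # default prob per step

cva = (1 - recovery) * np.sum(df * epe * pd_incr)
print(f"CVA per unit notional: {cva:.5f}")
```

Each simulated path is independent of the others, which makes the workload embarrassingly parallel; that independence is exactly what a grid of Xeons or a rack of Tesla GPUs exploits.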

While we’re on the topic of financial services and the new year, make sure to take a look at what’s cooking for the 2014 HPC on Wall Street event.

Looking Ahead to 2014

We can expect to see these same general areas in next year’s summary, but the performance, capability and programmability will hopefully continue to improve, leading to more insights. Outside of these broader industry and research segments, we look forward to delivering more interesting topics that don’t fit the mold (like this year’s stories about GPUs and the mysteries of flying snakes or tracking the first dinosaur steps).

Thanks for joining us for another exciting year!
