“We’re gonna need a bigger supercomputer” is what Norwegian oil and gas company Petroleum Geo-Services (PGS) must have said to Cray ahead of working with the iconic supercomputer maker to expand its seismic processing capability by a full 50 percent.
And it’s not like PGS didn’t already have a big supercomputer. The energy giant, which specializes in marine exploration, took delivery of a 4-petaflops (5.3 petaflops peak) Cray XC30 system last year, which at the time made it the largest commercial supercomputing user on record. (That title has since been usurped by another industry heavyweight, Total, which purchased its 5.3-petaflops (6.7 peak) “Pangea” supercomputer from SGI earlier this year.)
Announced today, PGS’s second XC-series Cray provides 2.8 peak petaflops packed inside 12 liquid-cooled cabinets, bringing PGS’s total spec’d supercomputing capacity to around 8 petaflops. The second stand-alone system will be housed in the same Houston datacenter as the existing 24-cabinet machine. The addition also boosts PGS’s existing Sonexion storage infrastructure by 2.8 petabytes.
The new XC40 machine, named “Galois” after the French mathematician Évariste Galois, is powered entirely by Intel Xeon processors, as is “Abel,” the larger Cray, which is named after the Norwegian mathematician Niels Henrik Abel. Both will be used for processing massive seismic datasets to satisfy customer demand for advanced Reverse Time Migration (RTM) images and Full Waveform Inversion (FWI) results. While Cray sells Xeon Phi manycore- and GPU-based systems, Cray Senior Vice President and Chief Strategy Officer Barry Bolding noted that in the case of PGS, the general-purpose x86 processors from Intel are ideal.
PGS hasn’t said whether it is entering Galois for TOP500 consideration, but currently its big brother, Abel, sits on the latest list (June 2016) at number 16 with a LINPACK score of 4 petaflops. It entered the listing one year prior at number 12. For reference, Total’s SGI ICE X supercomputer is currently ranked at number 11.
PGS started to hit the wall in terms of its existing cluster-based computing capabilities when it commenced the Triton survey in November 2013. The sophisticated seismic imaging survey is being conducted over a 10,000 square kilometer area of the Gulf of Mexico known for being particularly difficult to image. The survey was the most complex imaging challenge PGS had ever faced, and it drove the need for supercomputing-class infrastructure with high throughput and memory, a strong interconnect, and enhanced scalability. The business goal was to reduce development times and keep production on track as workflow complexity and volume increased.
Using full waveform inversion to expose what’s underneath the crust of the earth is the type of big, scalable supercomputing problem that needs high bandwidth both on the network and on the storage, says Bolding. “They are big simulations,” he adds. “The computer’s not running tens of millions of small jobs, it’s running hundreds of thousands of big jobs.”
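To give a flavor of why these workloads are bandwidth-hungry rather than dominated by raw FLOPS: the computational core of both RTM and FWI is repeated numerical propagation of a wavefield through a velocity model, typically via finite differences. Each time step streams entire pressure grids through memory to apply a small stencil. The sketch below is a deliberately simplified 2D acoustic example (not PGS’s or Cray’s actual code; grid sizes, velocities, and function names are illustrative):

```python
import numpy as np

def wave_step(p_prev, p_curr, c, dt, dx):
    """One leapfrog time step of the 2D acoustic wave equation.

    p_prev, p_curr: pressure fields at times t-dt and t
    c: velocity model (m/s); dt, dx: time and grid spacing
    Each call streams the full grids through memory to apply
    a 5-point stencil -- the memory-bandwidth-bound pattern
    at the heart of RTM/FWI kernels.
    """
    # Five-point Laplacian on the interior of the grid
    lap = (p_curr[:-2, 1:-1] + p_curr[2:, 1:-1] +
           p_curr[1:-1, :-2] + p_curr[1:-1, 2:] -
           4.0 * p_curr[1:-1, 1:-1]) / dx**2
    p_next = np.copy(p_prev)  # edges stay fixed (crude boundary)
    p_next[1:-1, 1:-1] = (2.0 * p_curr[1:-1, 1:-1] - p_prev[1:-1, 1:-1]
                          + (c[1:-1, 1:-1] * dt)**2 * lap)
    return p_next

# Toy run: 200x200 grid, constant 1500 m/s (water) velocity,
# impulsive point source in the center. dt satisfies the CFL limit.
n, dx, dt = 200, 10.0, 0.001
c = np.full((n, n), 1500.0)
p_prev = np.zeros((n, n))
p_curr = np.zeros((n, n))
p_curr[n // 2, n // 2] = 1.0

for _ in range(300):
    p_prev, p_curr = p_curr, wave_step(p_prev, p_curr, c, dt, dx)
```

A production survey runs this kind of propagation in 3D, on grids orders of magnitude larger, thousands of times per shot, and twice per FWI iteration (forward and adjoint), which is what pushes the problem from cluster scale to supercomputer scale.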
With their first Cray, PGS went from being unable to process the Triton survey within the production deadline to being able to run more and larger jobs using more complex data and algorithms. Code scalability improved, and jobs ran faster with higher-quality results.
“At the end of the day, that means they are getting more value, more work done per amount of money spent,” said Bolding. “That’s the equation that these commercial companies [go by]. It’s not about peak performance, it’s not about getting on the TOP500, it’s not about the peak specification of a processor, it’s about how many jobs you can get done.
“The same is true of the weather community. They buy our systems because they can get so many forecasts done in a day and they can get them done more efficiently – there they have a time constraint not just dollars. They might even be willing to pay more dollars in the weather prediction industry to be able to get those forecasts done in the time window that they have – but in the energy sector it’s much more about dollars per piece of work.”
“[The new supercomputer] allows PGS to run larger jobs, image more complex data, while at the same time reduce lead time and get higher quality results,” said PGS in a prepared statement. “Our customers can take advantage of cutting-edge imaging algorithms such as PGS Reverse Time Migration, Separated Wavefield Imaging (SWIM), and Wave Equation Reflectivity Inversion while at the same time reducing project duration in comparison to our competitors.”