BP Brings Petascale Computing to Oil and Gas Industry

By Michael Feldman

December 12, 2012

British multinational BP revealed it is building a new datacenter in Houston to house a 2-petaflop supercomputer. When installed in 2013, it will likely be the most powerful system deployed by a commercial entity, at least among those that have been publicly revealed. The upcoming petaflopper will support the company’s oil and gas exploration efforts and other research objectives.

According to the press release, BP’s existing datacenter in Houston has topped out in power and cooling capacity, so a new high performance computing facility was needed to support the company’s expanding HPC footprint. The new center will also be located in Houston and is scheduled to open around the middle of next year.

It will house compute and storage systems devoted to processing BP’s voluminous set of seismic data collected around the world. It will also support “rock physics,” which will enable company scientists to produce images of rock structures deep underground – all of this to help BP locate and exploit new oil and gas resources.

The future center and supercomputer will put BP’s HPC infrastructure on par with that of national labs. At 110,000 square feet, the new facility will actually be larger than the recent 95,000 square-foot datacenter built for NCSA’s 11.5-petaflop Blue Waters supercomputer. To go along with the 2 petaflops of peak number-crunching capability, the future BP machine will also be outfitted with 536 terabytes of memory and 23.5 petabytes of external disk storage.
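For context, the balance between those figures can be computed directly. The quick sketch below is ours, not BP’s, and uses only the numbers quoted above:

```python
# Back-of-the-envelope system-balance ratios from the announced specs.
peak_flops = 2.0e15      # 2 petaflops peak
memory_bytes = 536e12    # 536 terabytes of memory
disk_bytes = 23.5e15     # 23.5 petabytes of external disk

print(f"Memory per flop: {memory_bytes / peak_flops:.2f} bytes/flop")  # ~0.27
print(f"Disk-to-memory ratio: {disk_bytes / memory_bytes:.0f}x")       # ~44x
```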

The upcoming BP super will apparently be getting all its FLOPs from CPUs, about 67,000 of them according to the official announcement. In an email interview with HPCwire, Keith Gray, BP’s HPC center manager, said they are not quite ready to make the jump to heterogeneous computing. “We continue to test accelerators,” wrote Gray, “but have not built a strong business case for our complete application base.”

“We must create a competitive environment to maximize the capabilities we will deliver,” he continued. “Our researchers want to test their ideas on real problems at scale. They want to increase the resolution and complexity. We need to be flexible and take advantage of what the market can deliver.”
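As a rough sanity check (again ours, not BP’s), dividing the announced peak by the quoted CPU count gives the implied per-unit throughput; note that the announcement does not say whether the 67,000 figure counts sockets or cores:

```python
# Implied per-CPU throughput from the announced figures (illustrative only).
peak_flops = 2.0e15    # 2 petaflops peak
cpu_count = 67_000     # "about 67,000" CPUs per the announcement

print(f"Implied per-unit peak: {peak_flops / cpu_count / 1e9:.1f} gigaflops")
# ~29.9 gigaflops each -- closer to one contemporary Xeon core's peak
# (~21 gigaflops for a 2.6 GHz AVX core) than to a full multi-core socket,
# which suggests the count refers to cores.
```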

The existing HPC setup at BP provides an aggregate peak performance of more than 1.2 petaflops. It consists of multiple Intel Xeon-powered clusters, including 2,912 HP SL230 nodes (8-core 2.6 GHz Sandy Bridge CPUs), 1,920 Dell C6100 nodes (6-core 2.6 GHz Westmere CPUs), and 50 HP DL580 nodes (2.3 GHz Westmere EX CPUs). The core network in the current datacenter is Arista-provided Ethernet, while the storage systems have been gathered from various vendors, including Panasas, IBM, and DataDirect Networks.
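That 1.2-petaflop aggregate can be roughly reconstructed from the node inventory. The sketch below is a back-of-the-envelope estimate under stated assumptions — dual-socket SL230 and C6100 nodes, quad-socket 10-core DL580s, and 8 double-precision flops per cycle for AVX-era Sandy Bridge versus 4 for Westmere — none of which appear in the official announcement:

```python
# Hedged peak-performance estimate for BP's existing clusters.
# Assumptions (not stated in the article): sockets per node, cores per
# Westmere-EX socket, and DP flops/cycle per microarchitecture.
clusters = [
    # (nodes, sockets/node, cores/socket, GHz, flops/cycle)
    (2912, 2,  8, 2.6, 8),  # HP SL230, Sandy Bridge (AVX)
    (1920, 2,  6, 2.6, 4),  # Dell C6100, Westmere (SSE)
    (  50, 4, 10, 2.3, 4),  # HP DL580, Westmere-EX (assumed 4-socket)
]

total_gflops = sum(n * s * c * ghz * fpc for n, s, c, ghz, fpc in clusters)
print(f"Estimated aggregate peak: {total_gflops / 1e6:.2f} petaflops")
# -> roughly 1.23 petaflops, in line with the quoted "more than 1.2 petaflops".
```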

The largest MPI applications used at BP can scale to more than 30,000 cores, so the new system will give them plenty of headroom for expansion. It will also allow multiple large jobs to be processed in parallel. “Projects that currently run overnight can now be run twice a day – letting us try more ideas,” explained Gray. “If a project takes six months, we might choose to defer it. If we can complete in three months, we may choose to proceed.”
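Gray’s turnaround examples follow from straightforward capacity arithmetic. A minimal sketch, assuming runtime scales inversely with delivered compute and using a hypothetical 14-hour overnight job (the article quotes no actual job times):

```python
# Illustrative turnaround arithmetic; the job length is hypothetical.
old_peak_pf = 1.2    # existing aggregate peak
new_peak_pf = 2.0    # upcoming system

speedup = new_peak_pf / old_peak_pf    # ~1.67x
overnight_job_hours = 14               # assumed, not from BP
print(f"Same job on the new system: {overnight_job_hours / speedup:.1f} hours")
# ~8.4 hours -- short enough to fit two runs into a single working day,
# matching Gray's "twice a day" example.
```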

BP says its processing needs have increased 10,000-fold since 1999. Seismic imaging that would have taken four years of computing time a decade ago can now be accomplished in an hour. The increase in processing power over this period has transformed oil and gas exploration, allowing major new finds at a time when many were predicting that most of the world’s reserves had been located.
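Those two figures are mutually consistent, as a quick check shows:

```python
# Consistency check on the quoted speedups.
hours_in_four_years = 4 * 365 * 24   # the old runtime, in hours
print(f"Implied speedup: {hours_in_four_years:,}x")   # 35,040x
# The same order of magnitude as the 10,000-fold growth in processing
# needs BP cites since 1999, over a slightly longer window.
```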

With oil pushing $100 per barrel, there is plenty of incentive for these companies to invest in technologies that can uncover new reserves. For its part, BP has doubled HPC spending over the last few years and intends to keep that investment on an upward slope. The company is planning to test 15 new oil and gas sites over the next three years, and it expects that at least some of its 35 exploration wells will each yield the equivalent of a quarter-billion barrels of oil.

BP claims that its 2-petaflop system will be the largest such machine employed for commercial purposes. That may or may not be the case, since not all commercial supercomputing deployments are made public, especially in the financial services realm and the oil and gas industry. These just happen to be the two industries that have the wherewithal and the monetary incentives to buy top-of-the-line supercomputers. But, for competitive reasons, not all of them want to reveal the technology they are using to drive revenue.

In the current TOP500 rankings, the fastest Linpack machine obtained without the help of government funds is a 461-teraflop cluster belonging to an unspecified geosciences firm, sitting at number 44 on the November 2012 list. The BP system will be roughly four times as powerful, which would land it in the top 10 today.

While petaflop-plus computing is not commonplace yet, even in the government sector, BP’s plans are yet another indication that the petascale era is in full swing. And although there are only about 50 such machines in the world today, with the advent of teraflop accelerators and ever more powerful CPUs, such computing should become much more prevalent in the commercial arena and elsewhere over the next few years.

[The original version of this article erroneously referred to the Blue Waters supercomputer employing Xeon Phi processors. As of today, Stampede is the only petascale Phi-powered system. The original text also mistakenly referred to oil at $100 per gallon, rather than $100 per barrel. We regret the errors — Editor]
