Linux Clusters Target Oil & Gas Applications

By S. Julio Friedmann

August 25, 2006

As an industry, hydrocarbon exploration and production operates in an increasingly challenging environment. The new challenges include more than high risk and high capital commitments, or declining fields and complex operations. Unconventional plays have become conventional, with fractured and/or tight porosity systems becoming commonplace. New environmental challenges require sophisticated and constrained operations. In this evolving regulatory, economic, and political environment, it is not enough to be creative, aggressive and technically adroit. One also wants to be smart.

The good news is that smart is a lot cheaper than it used to be. Specifically, high performance computers (HPCs) are a lot less expensive than they used to be, and a lot more powerful. The fastest computer in the world, Blue Gene/L, runs at nearly 300 teraflops, or 300 trillion floating point operations per second. The real revolution is that regular computer servers have become HPCs through parallel architectures, increasing industrial and market penetration.

A small cluster of Linux boxes — 32 regular servers — now outperforms the world's fastest computers from only a few years ago at 1/100th of the cost. They also are compact and easily serviced. A 128-node cluster would take only three or four racks, easily fitting in a kitchen. These machines, the “big iron” of the world, have become readily available and powerful tools to tackle tough exploration, drilling and production problems.

Figure 1 shows the Thunder Linux cluster at Lawrence Livermore National Laboratory (LLNL). It is an 18-teraflop machine with more than 1,000 nodes and 4,000 central processing units, and ranks as the 11th fastest computer in the world. However, Thunder is about to be surpassed by an even faster and more powerful cluster system now being built for Lawrence Livermore. In late June, the Peloton supercomputing project was awarded to Appro for three 1U Quad XtremeServer clusters with a total of 16,128 cores based on next-generation AMD Opteron processors with DDR2 memory. To provide a production quality computing capacity, Peloton features a novel architecture that groups identical scalable units of 1,152 cores to form three shared-memory multiprocessor clusters.

Appro cluster

The Peloton clusters will be used in an unclassified environment as a multi-programmatic and institutional (M&IC) resource and in the classified environment to solve complex computational problems related to the National Nuclear Security Administration's (NNSA) Stockpile Stewardship Program, which ensures the safety, security and reliability of the nation's nuclear deterrent. Multiple organizations and programs within LLNL will share these supercomputing clusters for large, medium and small scale scientific simulations.

With scalable computing power at affordable pricing points, it is not surprising that massively parallel computers are becoming more common in oil and gas companies and their allied service companies. They mostly operate in seismic processing, although they also tackle problems from financial modeling to molecular chemistry. And more and more companies are looking to HPCs to solve tough problems in reservoir characterization and management. The reasons are simple: improved recovery, reserves stewardship and cost reduction.

Like any tool, however, they must be pointed at the right problem and operated well. Despite the high power and low cost of high performance computers, any commercial oil and gas company must understand why it should buy a machine, what it could do with one, and how it would fit sensibly into its business model. It must also know how to deploy the techs and scientists hired to work these machines. This is where the challenges to conventional operations and approaches can inform smart business how to wield big iron to solve big problems and turn big profits.

Two areas come to the fore. First, how can one handle uncertainty in the subsurface and in geophysical interpretation? Second, how can one simulate reservoirs in the increasingly difficult operational environment to obtain extremely high recoveries?

The Realm Of Uncertainty

Workers in the subsurface know only one thing with certainty: They are wrong. No one knows what the rocks and fluids truly look like between wells. Common unknowns are saturations, lithologic distributions, fracture character and geometry, and large-scale connectivity. Even the very best geophysics and geological concepts still cannot shake the irreducible uncertainty in a single geological or reservoir model.

So why should a company limit itself to one? Or 10,000?

Stochastic integration and inversion tackle this uncertainty head-on. Essentially, the approach generates thousands of forward models of some specific property, say, porosity, oil saturation or CO2 distribution. The inputs are trusted data such as well data, seismic constraints or production data, while the outputs are a handful of configurations that match all data, with a strict probabilistic ranking. This provides an operator not with one “best” model, but with several alternatives and their likelihoods. These models may vary in rock distribution, velocity or fluid properties in ways that are readily tested.
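The workflow above can be sketched in a few lines. This is a toy illustration, not any vendor's code: the forward operator, the observed value and its noise are all hypothetical, and a simple likelihood-weighted ranking stands in for a full stochastic inversion:

```python
import numpy as np

rng = np.random.default_rng(42)

# "Observed" data: a noisy well-log average porosity (hypothetical numbers)
obs_porosity = 0.21
obs_noise = 0.02

def forward_model(field):
    """Toy forward operator: predict the well observation as the
    field's mean porosity (a stand-in for the full physics)."""
    return field.mean()

# 1. Generate thousands of candidate porosity fields (prior realizations)
n_models = 5000
candidates = rng.uniform(0.05, 0.35, size=(n_models, 100))

# 2. Score each realization against the trusted data
predictions = np.array([forward_model(f) for f in candidates])
misfit = (predictions - obs_porosity) ** 2 / obs_noise ** 2
likelihood = np.exp(-0.5 * misfit)

# 3. Keep the realizations that match, with a probabilistic ranking
weights = likelihood / likelihood.sum()
ranked = np.argsort(weights)[::-1]
best = candidates[ranked[:5]]   # a handful of plausible scenarios
print("Top-5 model mean porosities:", best.mean(axis=1).round(3))
```

The output is not one answer but a ranked family of data-consistent models, which is exactly what the operator then tests against other field observations.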

This gets to the heart of many industrial problems: What information is needed to make large business decisions? Stochastic inversion can be applied to early seismic processing (exploration), post-discovery development planning, early production verification, history matching, and tertiary recovery planning — in short, every phase of the field life cycle.

In a tertiary recovery project in Wyoming, CO2 was injected and monitored using electrical resistance tomography (ERT) between abandoned wells. The initial, deterministic inversion looked noisy and unimpressive, and data collection ceased. Later, those same data served as the basis for a stochastic inversion. The likeliest solution still showed noise, but there were four other families of solutions, three of which showed a north-to-south trending plume and stimulation of a producing well.

To improve the analysis, another inversion was run with only one more piece of information: the volume of CO2 injected between ERT surveys. Suddenly, the highest probability looked like the north-to-south plume, and a secondary solution identified a possible anomaly around a water injector. One more difference map analysis revealed even higher confidence. The operators are looking at the field data to test the predictions of the inversion.

Figure 2 represents changes in resistivity among 19 abandoned wells after three weeks of injection over the 70-acre study area in the CO2 flood. The left image is the first difference map, showing mostly noise. The two middle maps show the two most likely solutions, noise and a CO2 plume. The right map shows the solution when only the total injection volume constraint was added.

For this case, no new data were collected after the first inversion. Instead, existing data and basic physics constrained the solution space very effectively. The exercise also pointed toward ways to test the model's predictions against production data and suggested new analyses. Using this technique allows the operator to leverage all relevant knowledge of the field and test interpretations that are subject to debate. It also helps inform operators of multiple scenarios and what new information may be needed to choose the most promising course of development. In fact, the less correlated the data sets (e.g., temperature, water cut, tiltmeter, crosswell seismic, etc.), the better the inversion.
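The effect of adding the single injected-volume constraint can be illustrated as a Bayesian re-weighting of an existing ensemble. All numbers here are hypothetical stand-ins for the field quantities, not the Wyoming data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical ensemble: each member's implied CO2 plume volume, as
# inferred from its resistivity-change model (arbitrary units).
n = 2000
plume_volume = rng.lognormal(mean=10.0, sigma=0.5, size=n)
prior_weight = np.full(n, 1.0 / n)   # equal weights after the first inversion

# One additional piece of information: the metered injection volume
injected = 25_000.0
meter_sigma = 2_500.0                # assumed metering uncertainty

# Bayesian re-weighting: members whose plume volume honors the injected
# volume gain probability; the rest fade away.
likelihood = np.exp(-0.5 * ((plume_volume - injected) / meter_sigma) ** 2)
posterior = prior_weight * likelihood
posterior /= posterior.sum()

# The posterior-mean plume volume now clusters near the metered volume
post_mean = (posterior * plume_volume).sum()
print(f"posterior-mean plume volume: {post_mean:,.0f}")
```

No new measurement was needed: one already-known number reshuffles the probabilities across the existing solution families, just as in the case described above.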

Stochastic inversion and integration are superior to conventional inversion and analysis in every way, except one. They are very computationally intensive. A typical stochastic analysis generates thousands of possible solutions. For a stochastic analysis to converge may take hundreds or even thousands of CPU hours. On a conventional workstation, that many CPU hours would require weeks to months to complete.

But this is where HPCs come in. A 256-CPU, 64-node cluster could execute an analysis in hours, depending on the problem. Even including setup, parameterization, I/O and other concerns, a single HPC could tackle 30 to 100 problems a year. Although this may not be enough for every asset within a large company, it may help handle the most difficult cases, the highest-risk projects or the largest few assets within a company.
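Because each stochastic realization is independent, the workload is embarrassingly parallel and maps naturally onto a cluster. A minimal sketch using Python's standard process pool (with a trivial stand-in for an expensive forward model) shows the pattern; the worker count and task count here are arbitrary:

```python
import math
from concurrent.futures import ProcessPoolExecutor

def run_realization(seed):
    """Stand-in for one forward simulation. Each call is independent,
    so realizations scale almost linearly across cluster nodes."""
    x = seed * 0.001
    # pretend this loop is an expensive reservoir forward model
    return sum(math.sin(x + i) for i in range(1000))

if __name__ == "__main__":
    seeds = range(256)   # one task per "CPU" in this toy setup
    with ProcessPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(run_realization, seeds))
    print(f"completed {len(results)} independent realizations")
```

On a real cluster the same pattern is typically expressed with MPI or a batch scheduler rather than a single-host process pool, but the scaling argument is identical: wall-clock time divides by the number of workers, minus setup and I/O overhead.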

A World Without Scale Up

Currently, these large assets comprise large reservoirs managed by engineers using large reservoir simulations. In many cases, the workflow for these simulations has not changed in years: Build a geological concept from the data, build a detailed static geological model from those concepts, scale up to a full-flow reservoir model, and someday attempt a history match. Many of these steps embed assumptions that cannot be verified, including relative permeability and scaling coefficients. The more of these assumptions are introduced, the less unique the solution for a given reservoir simulation.
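The scale-up step illustrates how such assumptions creep in. A minimal sketch, with hypothetical fine-grid permeabilities, shows how two standard upscaling choices for the same column of cells yield very different coarse values:

```python
import numpy as np

# Fine-grid permeabilities along one column (millidarcies, hypothetical)
k_fine = np.array([500.0, 120.0, 5.0, 300.0, 80.0])

# Flow perpendicular to the layers: harmonic mean (series flow)
k_harm = len(k_fine) / np.sum(1.0 / k_fine)

# Flow parallel to the layers: arithmetic mean
k_arith = k_fine.mean()

print(f"harmonic  (vertical) : {k_harm:7.1f} mD")
print(f"arithmetic (lateral) : {k_arith:7.1f} mD")
# The two bounds differ by roughly an order of magnitude here; which
# average to bake into the coarse cell is itself an unverifiable assumption.
```

A single low-permeability layer drags the harmonic mean far below the arithmetic one, which is precisely why the upscaling choice can dominate the coarse model's flow behavior.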

One approach is to not make the assumptions. Instead, brute force can be used to run very large simulations where the best geological understanding is rendered in detail. Already, models run at higher resolution than in the past. In the case of managing the world's largest asset, the Ghawar Oil Field, Saudi Aramco runs its POWERS simulator on a massively parallel HPC. As of 2004, this 128-node Pentium IV-based machine had run full field simulations with between 10 million and 100 million cells and more than 4,000 wells, with larger runs pending. These simulations are run with multicomponent hydrocarbon models, waterflooding with varying brine chemistries, and dual-perm response to match fracture-flow history. Some runs include CO2 floods.

This capability not only allows Saudi Aramco to run fairly large models with minimal or no scale up, but also to execute history matches extremely rapidly (in some cases, in hours to days). Saudi Aramco has used this capability for infill drilling, water cut management, breakthrough prediction and other basic reservoir engineering choices (Figure 3). New data can then be incorporated into updated geological models that underpin the simulations.

Almost all such full-flow models run on conventional finite volume codes. These have proven reliable in most fields. There are exceptions, however. Even in simulations with multicomponent oils, methane, CO2, water and dual-permeability systems, the simulation of many important processes is crude or absent altogether. While that is fine for many conventional cases, some require greater sophistication. This is true of thermal recovery, where extreme temperature and viscosity transients matter. The handling of fracture systems is also still poor: simple continuum models stand in for complex geometries with nonlinear stress/flow response. These models poorly predict dissolution or precipitation resulting from CO2 injection, bulk crustal deformation, or scale formation.

These require coupled, complex simulation tools called reactive transport models. Many research versions of these codes exist, including TOUGH2, NUFT, STOMP and others. Some are finite difference codes, some finite element, and some are coupled to discrete fracture and deformation codes. They have one commonality: They all require massively parallel machines to run sophisticated cases of 3-D stratigraphic and structural complexity of most hydrocarbon fields.
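At their core, reactive transport codes couple flow, transport and chemistry on a grid. A deliberately simplified 1-D sketch (explicit finite differences, first-order decay standing in for real geochemistry, all coefficients hypothetical) illustrates the kind of update these codes perform, at vastly larger scale and with far richer chemistry:

```python
import numpy as np

# 1-D explicit advection-diffusion-reaction sketch: a dissolved CO2
# front moving through a column while reacting away.
nx, nt = 100, 500
dx, dt = 1.0, 0.05
v, D, k = 1.0, 0.5, 0.01   # velocity, dispersion, first-order reaction rate

c = np.zeros(nx)
c[0] = 1.0                 # constant-concentration inlet boundary

for _ in range(nt):
    adv = -v * (c[1:-1] - c[:-2]) / dx              # upwind advection
    dif = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2  # dispersion
    rxn = -k * c[1:-1]                              # first-order decay
    c[1:-1] += dt * (adv + dif + rxn)
    c[0] = 1.0                                      # re-impose inlet
    c[-1] = c[-2]                                   # open outflow boundary

print(f"front position ~ cell {int(np.argmax(c < 0.5))}")
```

A production reactive transport code replaces the single decay term with full aqueous and mineral chemistry, works in 3-D on stratigraphically complex grids, and couples to geomechanics, which is what drives the need for massively parallel machines.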

Again, for those fields where fractures dominate the flow field, or where chemistry is difficult, HPCs provide a way to target tough local questions that impact cost or operations. For unconventional reserves simulation, such as in-situ oil shale recovery, steam injection into thermal diatomite, or enhanced coalbed methane recovery, advanced simulators on massively parallel platforms provide the hope of tackling tough operational problems, such as reducing and mitigating well failure events, and substantially improving recovery factors.

Big Iron And The Future

Could these areas be combined and optimized? One can certainly imagine using some kind of stochastic integration to provide an initial reservoir field model, which is updated with in-field information and advanced simulations run on the fly. Sequential stochastic runs reduce cycle time, allowing for additional reservoir detail and physical and chemical processes to enter models as necessary.

In all cases, information is processed and mapped to optimize around changing parameters (production rate, maximum recovery, environmental integrity, etc.). Even this complex scenario could be managed by a fairly small HPC, perhaps 32 to 64 nodes, for a medium size field. While this scenario is not yet in operation, all the components exist and could be integrated quickly and easily. One can imagine how this workflow could lead to substantial improvements in total recovery and operating cost reduction.

As mentioned, HPC is not all things for all cases. It is best used for managing specific projects or assets of greatest risk or greatest value. Saudi Aramco chiefly built its computer and simulator to model Ghawar. Even if a company does not have an asset like Ghawar or Prudhoe Bay, it still will have ventures or operations that represent a major investment. HPC applications can help reduce the risk and improve the performance of these projects.

Increased competition helps. Competition between chip makers Intel and AMD has not only dropped prices, but also produced common architectures that can handle a wide range of realistic technical challenges. As such, the challenge to operators and researchers alike is one of fit. What is the real problem, and what is a smart approach to solve it?

This is likely to require some new thinking about the exploration, drilling and production workflow. Is it possible to jump past steps such as creating a complex static geomodel? What is the value of upscaling, and can it be avoided? How are operational data sets measured and tracked? Ultimately, the value of HPC applications lies only in how they affect the value chain to reduce the cost of operations or cycle time. Identifying the key technical choke points in the business and rethinking the technical workflow can help focus the big iron to produce something novel, sexy and powerful. Something useful.

And something smart.


The author acknowledges Roger Aines, Steve Ashby, Bill Boas, Garfield Bowen, Ali Dogru and Abe Ramirez for discussions leading to this article. He also thanks Appro and Anadarko for supporting these technologies and research. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract W-7405-ENG-48.

Adapted and reprinted with permission from the July issue of The American Oil & Gas Reporter.
