Beyond Seismic – AI Joins HPC In The Instrumented Oil Field

By Gabor Samu, IBM Spectrum Computing

March 27, 2019

Energy companies are no strangers to HPC and big data. Seismic analysis and reservoir modeling have relied on HPC for decades. The landscape of HPC is changing, however. Modern oil fields have become highly instrumented, and combining sensor data with geological and seismic data provides new opportunities to boost productivity and reduce cost. In this article, we'll explain how new applications of machine learning and AI are changing oil and gas exploration and impacting HPC datacenters.

Traditional HPC in oil and gas exploration

3-D seismic surveys involve creating a shock wave at the surface and recording the returns using geophones. By analyzing the time it takes for waves to reflect off subsurface features, geophysicists can gain an understanding of the geology of an oil field and identify promising locations to sink exploratory wells. The data gathered from a single seismic survey can be in the range of a petabyte or more.
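
To make the basic geometry concrete, here is a minimal Python sketch that estimates reflector depth from two-way travel time. It assumes a single flat reflector and a constant average velocity, and the 2,500 m/s figure is purely illustrative; production seismic processing relies on detailed velocity models, stacking, and migration far beyond this.

```python
# Illustrative only: estimate reflector depth from two-way travel time,
# assuming a single flat layer and a constant (hypothetical) average
# P-wave velocity.

def reflector_depth(two_way_time_s: float, velocity_m_s: float = 2500.0) -> float:
    """Depth of a reflector given two-way travel time and average velocity."""
    # The wave travels down and back, so depth = velocity * (one-way time).
    return velocity_m_s * (two_way_time_s / 2.0)


if __name__ == "__main__":
    # A geophone records a reflection 1.2 s after the source shot.
    print(f"Estimated reflector depth: {reflector_depth(1.2):.0f} m")  # ~1500 m
```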

Building on seismic survey data, reservoir modeling uses computer simulation to evaluate various strategies for exploiting an oil field. Through simulation, companies identify the strategies that pose the least environmental risk, carry the lowest cost, and maximize yield.

With survey costs in the range of $30K per square kilometer and the cost of sinking a single well running into the hundreds of millions of dollars, thorough simulation is essential. More compute and storage capacity means that organizations can run more detailed simulations, have higher confidence in where to drill, and perform more "what-if" analyses to improve decisions and avoid costly mistakes.


The instrumented oil field

The capital-intensive nature of energy exploration provides big financial incentives to automate. Modern oil fields have become highly instrumented, with control systems that gather and analyze data from thousands of sensors and with control elements that automate processes without human intervention.

Whether onshore or offshore, almost every parameter affecting operations is measured and recorded in real time: inflow and outflow pressures in pipes and hoses, pump strokes and operating temperatures, bit depth and torque on drilling assemblies, and the tension on cables.

Sensor data is typically logged for diagnostic purposes and downstream analysis. A consequence of all this automation is that in addition to seismic and geological data, energy companies collect vast amounts of sensor data that needs to be processed, aggregated, stored and analyzed.

The impact of machine learning and AI

With so much data on hand, organizations have a strong incentive to put it to work. Recent advances in big data and analytics, including software frameworks such as Apache Spark, have made it easier to manipulate very large data sets. A variety of powerful AI frameworks are also creating new opportunities to gain insights from data. AI can help streamline exploration and discovery and make recovery operations more efficient, resulting in higher yield and profitability.
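
As a hint of what that data manipulation looks like in practice, below is a minimal PySpark sketch that rolls raw pump readings up into hourly per-pump statistics. The file paths and column names (pump_id, ts, outflow_pressure, operating_temp) are hypothetical assumptions; a real pipeline would more likely read from a process historian or a message bus than from a flat CSV file.

```python
# A minimal sketch: aggregate raw pump sensor readings into hourly
# statistics suitable for downstream analysis or model training.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, col, window, max as spark_max

spark = SparkSession.builder.appName("pump-sensor-rollup").getOrCreate()

readings = (spark.read.csv("/data/rig42/pump_sensors.csv",
                           header=True, inferSchema=True)
            .withColumn("ts", col("ts").cast("timestamp")))

# Roll raw readings up into hourly per-pump statistics.
hourly = (readings
          .groupBy("pump_id", window(col("ts"), "1 hour"))
          .agg(avg("outflow_pressure").alias("avg_pressure"),
               spark_max("operating_temp").alias("peak_temp")))

hourly.write.mode("overwrite").parquet("/data/rig42/pump_hourly")
```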

When we think of AI, we might think of expert systems, autonomous drones inspecting pipelines, or specialized robots operating on the ocean floor. Machine learning is proving valuable in a much wider variety of areas, however. Here are some specific examples:

  • Improving the quality of reservoir models – Ideally, energy companies would like to use real rock property values obtained from physical core samples, but the lab time and analysis required make this cost prohibitive. When drilling, sensors gather enormous amounts of log data. Trained machine learning models can predict rock properties from electronically gathered log data, enabling companies to improve reservoir model data quality for higher yield at lower cost (a minimal sketch of this approach follows the list).
  • Predicting net present value for drilling locations – Determining where and how to drill in a known field to maximize net present value (NPV) is enormously valuable. Given data from previously completed wells, a model can be trained to predict a target well's first six months of cumulative oil production, taking into account geological, drilling, and seismic data, helping firms make better decisions and avoid unproductive wells.
  • Predictive maintenance – Machine learning algorithms can combine equipment sensor data with maintenance records to predict failures, determine optimal preventive maintenance strategies, maximize the life of machinery, and proactively replace components nearing end of life before failures can disrupt operations.
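
To illustrate the first bullet, here is a minimal sketch of the rock-property idea: train a regressor on wells where lab-measured core porosity exists, then predict porosity for wells that have only electronically gathered log data. The CSV paths, column names, and choice of model are illustrative assumptions, not a description of any particular production workflow.

```python
# A minimal sketch: predict core porosity from well-log measurements.
# File names, feature columns, and model choice are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

logs = pd.read_csv("cored_well_logs.csv")
features = ["gamma_ray", "resistivity", "bulk_density",
            "neutron_porosity", "sonic_dt"]
X, y = logs[features], logs["core_porosity"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = GradientBoostingRegressor(random_state=42)
model.fit(X_train, y_train)
print("MAE on held-out wells:",
      mean_absolute_error(y_test, model.predict(X_test)))

# Apply the trained model to wells with log data but no core analysis.
uncored = pd.read_csv("uncored_well_logs.csv")
uncored["predicted_porosity"] = model.predict(uncored[features])
```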


New technology provides new opportunities

Just as GPUs are revolutionizing other aspects of HPC and AI, they are changing energy exploration as well. IBM and Stone Ridge Technology broke the world record for running a billion-cell reservoir simulation model using Stone Ridge Technology's ECHELON reservoir simulation software. The simulation ran in 92 minutes using 90 IBM Power processors and 120 NVIDIA GPUs, beating a previous 20-hour result that required a cluster with 716,800 cores. In addition to running more than 10x faster, the simulation required just 1/10th the electrical power and 1/100th the data center space of traditional HPC cluster solutions.

A consequence of this improvement in density and power efficiency is that it becomes much more practical to forward deploy HPC capacity closer to exploration sites, including on offshore platforms. This can reduce the need to transfer massive amounts of data to remote data centers or clouds, improving turnaround time and further reducing costs.

New challenges for HPC data centers

New GPU-aware applications, together with opportunities to exploit sensor data and new predictive models, are affecting how HPC clusters are deployed and managed. HPC environments need to support not only seismic and reservoir modeling simulations, but also a variety of big data, analytic, and machine learning environments.

Data is a critical challenge. HPC environments need to efficiently store and retrieve diverse data using multiple access methods and storage models, including POSIX file systems, object stores, time-series data stores, and HDFS.

Supporting the full range of oil & gas workloads

IBM Spectrum Computing can help energy companies consolidate the full range of HPC, big data, analytic, and machine learning workloads on a shared environment.

IBM Spectrum LSF is widely used for a variety of applications, including reservoir simulation and seismic processing. Complementary solutions such as IBM Spectrum Conductor and IBM Watson Machine Learning Accelerator add support for additional frameworks so that infrastructure can be shared seamlessly between simulation, big data and analytics, and GPU-intensive distributed machine learning workloads for greater efficiency.

IBM Spectrum Scale and IBM Spectrum LSF Data Manager provide high-performance file and object storage with efficient data replication and workload-aware policy-based data movement between local clusters and remote data centers.

HPC in oil and gas exploration is evolving. While core HPC applications aren't going anywhere, they're rapidly being augmented with new analytic and machine learning models. IBM Spectrum Computing, IBM Spectrum Scale, and IBM Power Systems can help energy companies navigate this transition, delivering an efficient shared compute and storage foundation for the full range of HPC workloads on-site, in your data center, or in your choice of clouds.
