At the beginning of March I attended the Rice Oil & Gas HPC conference in Houston. That seems a long time ago now. It’s a great event where oil and gas specialists join with compute veterans and the discussion tells you what the year is going to bring. At least most years it does.
In March the main topic was deep sea drilling in the Gulf of Mexico. It is now possible to position a drill bit to within feet of its target, even when drilling Gulf oil fields 30 thousand feet down. In the early 2000s the industry was drilling to around 10-20 thousand feet, but around 2010 the Paleogene, Norphlet and Cretaceous layers became accessible.
These discoveries, and the leaps and bounds in sensor and imaging technology, mean that it is now practical to exploit deep sub-sea resources in ways not possible before. In particular, the ability to reliably process signals reflected back through the salt layer has been a critical advance. Salt is an easier material to drill through, but it contains many impurities, and ten years ago it was not possible to create reliable imagery of what lay beneath it. Seismic processing is the technology that has had the biggest impact on pushing what is possible, with permanent seismic arrays reducing costs by 70%. Seismic and reservoir simulations are combined to monitor how the oil and gas is moving, in order to increase production.
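To give a flavour of the signal processing involved, here is a toy sketch, not an industry algorithm: it estimates a reflector's two-way travel time by cross-correlating a source wavelet with a noisy recorded trace. The wavelet shape, sampling interval and amplitudes are all made-up assumptions for illustration.

```python
# Toy sketch: estimate a reflector's two-way travel time by
# cross-correlating the source wavelet with the recorded trace.
# All signals and numbers here are illustrative assumptions.
import numpy as np

def estimate_delay(source, trace, dt):
    """Return the lag (in seconds) that best aligns source with trace."""
    corr = np.correlate(trace, source, mode="full")
    lag = np.argmax(corr) - (len(source) - 1)
    return lag * dt

dt = 0.001                                     # 1 ms sampling interval
t = np.arange(0, 0.2, dt)
wavelet = np.exp(-((t - 0.02) ** 2) / 1e-5)    # simple pulse as the source
true_delay = 0.08                              # assumed 80 ms two-way travel time
trace = np.zeros(1000)
shift = int(true_delay / dt)
trace[shift:shift + len(wavelet)] = 0.3 * wavelet          # weak, delayed echo
trace += np.random.default_rng(0).normal(0, 0.01, trace.size)  # recording noise

print(round(estimate_delay(wavelet, trace, dt), 3))  # recovers the 0.08 s delay
```

Real seismic processing stacks thousands of such traces and corrects for the salt layer's distortions, but the core idea of recovering travel times from noisy reflections is the same.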
None of this would be possible without a big hunk of metal. BP, for example, has 6,500 compute nodes serving 400 users, with around 40 active at once. They run 50-100 thousand jobs per day. Some are single-core, but most are MPI jobs. That adds up to millions of jobs per year, which is a huge challenge for the team.
And then the world changed…
Two months ago the world was a very different place. The worldwide lockdown caused by the Covid-19 virus has dramatically reduced oil consumption, faster than the industry can cut back on production. Oil storage facilities are full, and the price of oil in the US dipped below zero for the first time in history. How long that will last, no one can really say, but the world is not going to be the same when it is over, and the hope is that we will never return to previous levels of fossil fuel consumption.
Joining the global fight, several energy companies have donated spare capacity on their supercomputers to help research into the virus and potential treatments. BP is contributing to the US-based COVID-19 High Performance Computing Consortium, and in Italy Eni has joined the EU-backed EXSCALATE4CoV project.
What does this mean for HPC in the energy sector long term?
As oil and gas wells are shut in, reservoir performance starts to be impacted and oil and gas flows respond in unpredictable ways. We can expect to see much more use of digital and computational power, replacing old-fashioned horsepower turning drill bits.
The industry is not used to shutting in oil and gas wells that cost millions of dollars to drill and complete, and understanding reservoir performance will be critical to ensuring that those hydrocarbon resources retain as much long-term value as possible.
Oil companies will be looking at measurable features such as well pressure, flow testing and other surface-based telemetry to help understand how reservoirs are likely to respond to a restart of production. However, visualizing the flow of hydrocarbons beneath the surface will continue to rely on imaging, both reprocessing existing data and collecting additional data.
Building models of reservoir performance in this new scenario will involve a complex process of fitting observations to reservoir modelling algorithms. What seems certain is that while some oil and gas reservoirs will recover, with economics that can fit the new world of lower oil and gas prices, others will fail to make the cut. The role of computational techniques in those multi-billion dollar decisions will be crucial.
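Fitting observations to a model can be illustrated with the simplest possible case. The sketch below is emphatically not a reservoir simulator: it assumes a hypothetical exponential decline curve q(t) = q0·exp(-d·t) and recovers its parameters from noisy synthetic production data with a log-linear least-squares fit. Real history matching couples far richer physics to the same underlying idea of tuning model parameters until they reproduce what the wells actually measured.

```python
# Minimal sketch of fitting observations to a model. The exponential
# decline curve and all numbers below are illustrative assumptions,
# not real reservoir physics or real production data.
import numpy as np

def fit_exponential_decline(t, q):
    """Fit q(t) = q0 * exp(-d * t) to data; returns (q0, d)."""
    slope, intercept = np.polyfit(t, np.log(q), 1)  # straight line in log space
    return np.exp(intercept), -slope

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 50)                       # years of production
q_true = 1000 * np.exp(-0.15 * t)                # assumed "true" decline
q_obs = q_true * rng.lognormal(0, 0.02, t.size)  # noisy measurements

q0, d = fit_exponential_decline(t, q_obs)
print(round(q0), round(d, 3))  # close to the assumed 1000 and 0.15
```

The interesting decisions, as in the industry's case, happen after the fit: a recovered decline rate feeds directly into whether a shut-in well is worth restarting at the new oil price.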
About the Author
Dr. Rosemary Francis is CEO and founder of Ellexus, the I/O profiling company (www.ellexus.com). Ellexus makes application profiling and monitoring tools to protect storage from rogue jobs and noisy neighbours, make cloud migration easy and allow a cluster to be scaled rapidly. The system- and storage-agnostic tools provide end-to-end visibility into exactly what applications and users are up to. We don’t just give you data about what your programs are doing; our tools include expertise on what is going wrong and how you can fix it.