Chevron and two of its partners recently discovered a new field in the deepwater Gulf of Mexico that could yield 3 to 15 billion barrels of oil, boosting U.S. reserves by up to half. At the Council on Competitiveness' HPC Users Conference on September 7, Chevron CTO Dr. Donald Paul gave an impromptu talk about the discovery and the role HPC played in it.
Paul said HPC was crucial to enabling this important discovery. HPC has been used for seismic processing for many years, but Chevron’s “Jack-2” reservoir and others like it in the deepwater Gulf of Mexico lie at the very edge of current seismic imaging capability. Paul explained that imaging at the scale of this project was unprecedented, with data sets of up to a quadrillion (10^15) points. Processing such vast data sets was impossible until advances in HPC capabilities and visualization technologies arrived in the past few years.
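To put that quadrillion-point figure in perspective, here is a back-of-envelope storage sketch. The 4-byte sample size is an illustrative assumption (single-precision floats are common in seismic processing); the talk gave only the point count.

```python
# Back-of-envelope estimate of raw storage for a quadrillion-point
# seismic data set. The 4-byte sample size is an illustrative
# assumption, not a figure from the talk.
points = 10**15            # a quadrillion samples
bytes_per_sample = 4       # assumed: one 32-bit float per sample
total_bytes = points * bytes_per_sample

petabytes = total_bytes / 10**15
print(f"Raw data volume: {petabytes:.0f} PB")  # → Raw data volume: 4 PB
```

Even under these conservative assumptions, the raw volume lands in the petabyte range, which helps explain why such processing only became feasible in recent years.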
The features of the newly discovered reservoir were completely invisible until recently because of a huge canopy of salt, in places miles thick, and geologists were skeptical about how much oil the region might hold. But with high performance computing, what was invisible became clear. “Geology's always been smarter than the geologists,” said Paul. “Nature is so complex that our knowledge is very small in comparison. The machines get faster so you can see more, adjust the algorithms, and finally see what you're looking for. What we found is 300 miles long and 100 miles wide.” This, he said, has been the whole history of seismic imaging. Seismic imaging is not an exact science; it is “always a question of which approximation is best.” Paul said Chevron evolves its algorithms every six months, which lets the company “just see things that were not visible before.”
Once HPC permitted Chevron to “see” the possibilities, the company had the confidence to proceed with the enormously expensive process of drilling a test well. HPC was used again for the even larger challenge of modeling the drilling process itself, and this computer modeling was done in real time.
Specialized ships were needed to drill through 7,000 feet of water and 20,000 feet of underlying rock. The steel drillstrings were five miles (eight kilometers) long and cost more than $1 billion each. The drilling was performed entirely by robotics.
The next stage, Paul said, is to model these reservoirs to decide how best to develop them. This will involve simulations with billions of cells. Again, the modeling will not be done in the lab, but “on the front line of production work.”
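The memory footprint of such a simulation can be sketched with similar arithmetic. The cell count and per-cell state below are hypothetical illustrations, since the talk gave only the order of magnitude (“billions of cells”).

```python
# Rough memory estimate for a reservoir simulation with billions of
# cells. Cell count and per-cell state are hypothetical; the talk
# said only "billions of cells".
cells = 5 * 10**9          # assumed: 5 billion grid cells
doubles_per_cell = 10      # assumed: ~10 state variables per cell
bytes_per_double = 8

total_bytes = cells * doubles_per_cell * bytes_per_double
gigabytes = total_bytes / 10**9
print(f"Simulation state alone: {gigabytes:.0f} GB")  # → Simulation state alone: 400 GB
```

Even this modest sketch puts the working state in the hundreds of gigabytes, before counting solver workspace, which suggests why the modeling must run on production HPC systems rather than desktop machines.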
Chevron used its own proprietary seismic imaging software on an HPC system Paul described as “a cluster of a few thousand processors.” He said the discovery “unveils an enormous accumulation trend of oil,” but cautioned that “there's a big difference between accumulation and actual oil. We have a long way to go, years really.”