September 15, 2006
Chevron and two of its partners recently discovered a new field in the deepwater Gulf of Mexico that could yield 3 to 15 billion barrels of oil, potentially boosting U.S. reserves by as much as half. At the Council on Competitiveness' HPC Users Conference on September 7, Chevron CTO Dr. Donald Paul gave an impromptu talk about the discovery and the role HPC played in it.
Paul said HPC was crucial in enabling the discovery. HPC has been used for seismic processing for many years, but Chevron's "Jack-2" reservoir and others like it in the deepwater Gulf of Mexico lie at the very edge of current seismic imaging capability. Paul explained that imaging at the scale of this project was unprecedented, with data sets of up to a quadrillion (10^15) points. Processing data sets that vast was impossible until advances in HPC and visualization technologies arrived over the past few years.
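For a rough sense of scale (the 4-byte sample size and 1 GB/s throughput below are illustrative assumptions, not figures from the talk), a quadrillion-point survey implies petabytes of raw data:

```python
# Rough illustration of the data volumes behind a 10^15-point seismic survey.
# The sample size and throughput figures are illustrative assumptions,
# not numbers from Chevron's project.

points = 10**15            # samples in the data set
bytes_per_sample = 4       # single-precision float (assumed)
total_bytes = points * bytes_per_sample

print(f"Raw data volume: {total_bytes / 1e15:.0f} PB")            # ~4 petabytes

# Even at a sustained 1 GB/s, a single pass over the data takes weeks:
read_seconds = total_bytes / 1e9
print(f"Single pass at 1 GB/s: {read_seconds / 86400:.0f} days")  # ~46 days
```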
The features of the newly discovered reservoir were completely invisible until recently because of a canopy of salt, in places miles thick, and geologists were skeptical about how much oil the region might hold. With high performance computing, what was invisible became clear. "Geology's always been smarter than the geologists," Paul said. "Nature is so complex that our knowledge is very small in comparison. The machines get faster so you can see more, adjust the algorithms, and finally see what you're looking for. What we found is 300 miles long and 100 miles wide." This, he said, has been the whole history of seismic imaging: it is not an exact science, and it is "always a question of which approximation is best." Chevron evolves its algorithms every six months, he said, and this enables them to "just see things that were not visible before."
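The kind of approximation Paul describes can be sketched in miniature. Below is a toy, constant-velocity Kirchhoff-style diffraction stack in Python; it is a didactic sketch only, not Chevron's proprietary method, which must contend with a 3D velocity model distorted by the salt canopy:

```python
import numpy as np

def kirchhoff_stack(traces, rec_x, dt, velocity, image_x, image_z):
    """Toy constant-velocity Kirchhoff migration (zero-offset).

    traces:   (n_receivers, n_samples) recorded amplitudes
    rec_x:    (n_receivers,) receiver positions along the surface
    dt:       sample interval in seconds
    velocity: assumed constant wave speed (real surveys need a 3D model)
    image_x, image_z: coordinates of the image grid
    """
    image = np.zeros((len(image_z), len(image_x)))
    n_samples = traces.shape[1]
    for iz, z in enumerate(image_z):
        for ix, x in enumerate(image_x):
            # Two-way travel time from each receiver to this image point.
            dist = np.hypot(rec_x - x, z)
            t_idx = np.round(2 * dist / velocity / dt).astype(int)
            valid = t_idx < n_samples
            # Sum energy along the diffraction hyperbola.
            image[iz, ix] = traces[np.flatnonzero(valid), t_idx[valid]].sum()
    return image
```

Each image point is an independent sum over receivers, which is why this class of algorithm parallelizes well across large clusters of the kind Paul mentions below.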
Once HPC permitted Chevron to "see" the possibilities, the company had the confidence to proceed with the enormously expensive drilling of a test well. HPC was used again for the even larger challenge of modeling the drilling process itself, and this modeling was done in real time.
Specialized ships were needed to drill through 7,000 feet of water and 20,000 feet of underlying rock, using steel drillstrings five miles (8 kilometers) long. The ships cost more than $1 billion each, and the drilling was run entirely by robotics.
The next stage, Paul said, is to model the reservoirs in order to decide how best to develop them. This will involve simulations with billions of cells. Again, the modeling will not be done in the lab, but "on the front line of production work."
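Reservoir simulators of the kind Paul alludes to solve fluid-flow equations over a grid of cells. The sketch below is a minimal single-phase pressure-diffusion step with made-up parameters, assuming a toy 2D grid and a single fixed-pressure well; production simulators handle multiphase flow, complex geology, and, as Paul notes, billions of cells:

```python
import numpy as np

# Toy single-phase reservoir pressure update (explicit finite differences).
# Grid size, diffusivity, and time step are illustrative values only.
nx, nz = 200, 100
pressure = np.full((nz, nx), 30.0)   # MPa, initial reservoir pressure
pressure[nz // 2, nx // 2] = 20.0    # drawdown at a producing well
alpha, dt, dx = 1e-2, 1.0, 1.0       # diffusivity, time step, cell size

for _ in range(1000):
    # 5-point Laplacian on interior cells; boundary cells held fixed.
    lap = (pressure[:-2, 1:-1] + pressure[2:, 1:-1] +
           pressure[1:-1, :-2] + pressure[1:-1, 2:] -
           4 * pressure[1:-1, 1:-1]) / dx**2
    pressure[1:-1, 1:-1] += alpha * dt * lap
    pressure[nz // 2, nx // 2] = 20.0  # well keeps drawing pressure down

print(f"Mean field pressure: {pressure.mean():.2f} MPa")
```

Scaling this idea from the twenty thousand cells above to billions is what moves such modeling off the workstation and onto production HPC systems.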
Chevron used its own proprietary seismic imaging software, running on an HPC system Paul described as "a cluster of a few thousand processors." He said the discovery "unveils an enormous accumulation trend of oil," but cautioned that "there's a big difference between accumulation and actual oil. We have a long way to go, years really."