Oil and Gas HPC Workshop Highlights Industry Challenges

By Ted Walker

March 14, 2012

Leaders from the oil and gas industry and the high performance computing and information technology industries, as well as academics and representatives from national laboratories, met at Rice University in Houston, Texas, on March 1 for the 5th annual Rice Oil and Gas HPC Workshop.

The oil and gas industry depends heavily on high performance computing to earn meaningful returns on its investments in drilling and production. The industry’s enormous data and processing demands are driven by the geophysical workloads that support exploration, like seismic imaging and reservoir simulation, which help companies assess reservoirs and place wells.

The workshop drew a record 300 attendees, who heard talks from industry and academic leaders and took part in sessions that delved into the tools and techniques leading the way forward for HPC in the oil and gas industry. The event is organized by the Ken Kennedy Institute for Information Technology at Rice University. More than a conference of formal talks, the workshop puts networking and conversation at the forefront, and vendors and participants from around the world engaged in dialog about opportunities and challenges.

“The growth of the workshop continues to show the importance of high performance computing as a critical business enabler and differentiator in the oil and gas industry, with a well-understood return on investment,” said Jan Odegard, executive director of the Ken Kennedy Institute. “The energy at this year’s conference is a strong indicator of the desire to attack head-on the challenges in applying computing across the oil and gas industry.”

Interactive talks covered a wide range of topics, from algorithm optimization and performance and programming tools, like HPCToolkit and Loo.py, to programming models and languages, such as Co-array Fortran and OpenCL. Discussion also covered the challenges of developing and managing HPC facilities and infrastructure, as well as the open-source software framework IWAVE.

The workshop also featured several keynote addresses on emerging HPC and data center challenges in the oil and gas industry:

Cray CEO Ungaro on Oil & Gas: “We’re Back in the Industry”

Peter Ungaro, President and Chief Executive Officer of Cray Inc., delivered the workshop’s opening keynote address, and he spoke about his company’s foray into oil and gas.

“If I was sitting in your chair right now,” Ungaro said, “I would wonder, ‘What are you doing here, Pete? Cray is a company that mostly goes to national laboratories. Were you trying to get to Oak Ridge last night and you got off in Houston by accident?’ Well, I would tell you that the same challenges that are happening in building some of the biggest supercomputers in the world are going to hit the oil and gas industry.”

“From my perspective,” he continued, “[these challenges] are going to change the kind of machines that we put in production within the oil and gas community, and a lot of the requirements are fundamentally going to shift as we go through these changes in processing. The requirements keep rising. The machines that we’re using today are not the kind of machines that we are going to be using in a few years.”

Ungaro also addressed what he called the compelling business case for HPC in the oil and gas industry.

“When we look at the requirements from a compute and data standpoint, they are huge,” he said. “From data acquisition, through seismic, through reservoir simulation, and to downstream needs, that is a very broad set of applications that stresses many different parts of the machine and many different aspects of processing, not only for petaflop-sized systems for doing state-of-the-art computations, but also on the data side. From that perspective I think it’s a very unique kind of model.

“An accurate seismic image has huge returns. A well can cost a lot of money, and restating a reserve has some serious business implications. When the requirements and the returns are huge, the demand for getting it right really goes up.”

As systems increase in complexity and integration, Ungaro noted, there will be a need for increased investment in software technology that supports both scale-up and scale-out architectures and takes advantage of accelerators, all while hiding that complexity on the front end.

“There was a time when there were a lot of Cray computers in the oil and gas industry,” Ungaro concluded. “We’re back in the industry.”

IWAVE: An Open-Source Software Framework

William Symes, Director of The Rice Inversion Project and Professor in Computational and Applied Mathematics at Rice University, gave a detailed introduction to IWAVE, the open-source framework for regular grid finite difference modeling.

The IWAVE package evolved from its beginnings as a quality-control component of the SEG Advanced Modeling (SEAM) project into a framework that can be used to explore new algorithms, port code, benchmark architectures, and test tools and ideas. In several breakout workshops, Ted Barragy of AMD and Murtaza Ali of Texas Instruments (TI) demonstrated the framework’s versatility in the oil and gas industry by porting components to AMD’s Fusion APU and TI’s KeyStone-based C66x multicore DSP.
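To make the technique concrete, here is a minimal sketch of regular grid finite difference modeling in its simplest form: a 1D acoustic wave advanced with an explicit second-order scheme. It is written in Python for brevity and is purely illustrative; IWAVE itself is a C/C++ framework with staggered-grid schemes, absorbing boundaries and parallel I/O, and none of the names below come from its API.

    # Minimal 1D acoustic finite-difference sketch (illustrative only;
    # IWAVE's actual schemes, boundaries and API differ).
    import numpy as np

    def ricker(t, f0=25.0, t0=0.04):
        """Ricker wavelet of peak frequency f0 (Hz), delayed by t0 (s)."""
        a = (np.pi * f0 * (t - t0)) ** 2
        return (1.0 - 2.0 * a) * np.exp(-a)

    nx, dx = 401, 5.0        # grid points and spacing (m)
    nt, dt = 1000, 0.0005    # time steps and step size (s)
    c = np.full(nx, 2000.0)  # velocity model (m/s); homogeneous for simplicity
    src = nx // 2            # source index at mid-grid

    # The explicit scheme is stable only if c*dt/dx <= 1 (CFL condition)
    assert c.max() * dt / dx <= 1.0, "time step too large for this grid"

    p_prev = np.zeros(nx)    # pressure field at t - dt
    p_curr = np.zeros(nx)    # pressure field at t
    for it in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = (p_curr[2:] - 2.0 * p_curr[1:-1] + p_curr[:-2]) / dx**2
        # update: p(t+dt) = 2 p(t) - p(t-dt) + (c dt)^2 * laplacian + source
        p_next = 2.0 * p_curr - p_prev + (c * dt) ** 2 * lap
        p_next[src] += dt**2 * ricker(it * dt)
        p_prev, p_curr = p_curr, p_next

    print("peak amplitude after %d steps: %.3e" % (nt, np.abs(p_curr).max()))

A production framework wraps this kind of time-stepping loop in domain decomposition and tuned kernels, which is precisely what makes ports to new hardware, like the APU and DSP demonstrations above, a meaningful benchmark.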

In another workshop talk, John Mellor-Crummey, Professor of Computer Science at Rice University, used IWAVE to illustrate how HPCToolkit can provide insight for performance optimization.

Envisioning Exascale

Addressing the transition from petascale to exascale computing, Rajeeb Hazra, general manager of Intel’s Technical Computing Group, looked ahead to the journey toward exascale.

“Why do we care about getting to exascale?” he asked. “Scientists either have more to do, or they need to do the same thing much quicker.” Accepting the assumption that there is a continual need for more performance in HPC, Hazra called attention to the costs of the pace of improvement already experienced: power usage has risen right alongside performance.

“Exascale is not about one computer,” Hazra continued. “Exascale is a statement that means you have hundreds of computers that are a petaflop at half a rack. That’s the promise of exascale. You should look at it as a supply chain that has suddenly been enabled at a much higher level, not just that the top of the supply chain has the biggest computer to run on.

“We’ve gone from a few kilowatts for the largest computing systems to the largest system in existence, which is the K computer in Japan, taking 10 megawatts per ten petaflops. So it looks wonderful, but we’ve done this by scaling out systems faster than Moore’s Law. We have not done what we need to do, which is to change the performance density.”

“You have to now approach this problem from a systems perspective,” Hazra continued. “My message to programmers: your mindset needs to change. The more sequential you are, the more problems you are creating for yourself. You are thinking about performance and results in an old paradigm. Whether it’s domain-specific languages or particular pragmas or a library-based approach to getting parallel, you have to be able to express it. Parallelism with no locality, and locality with no parallelism are extremes you have to avoid.” 
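Hazra’s performance-density point is easy to quantify. Assuming the widely cited 20-megawatt exascale power budget (a target figure, not a number from his talk), the required jump in energy efficiency is roughly fiftyfold:

\[
\frac{10\ \mathrm{PFLOPS}}{10\ \mathrm{MW}} = 1\ \mathrm{GFLOPS/W}
\qquad \text{versus} \qquad
\frac{1\ \mathrm{EFLOPS}}{20\ \mathrm{MW}} = 50\ \mathrm{GFLOPS/W}.
\]

Scaling out today’s roughly 1 GFLOPS/W designs to an exaflop would instead demand a gigawatt of power, which is the force of his remark about performance density.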

He also stressed hardware and software co-design, with a feedback loop between the two parties. “Co-design has to be something that changes what you are doing and what I was going to do in a concerted way,” he said. “It’s not requirements gathering. It’s a tough problem. It can be like trying to get the Congress to work together, but it has to be done.”

The Power Challenge

Los Alamos National Laboratory (LANL) engineers Richard Rivera and Farhad Banisadr discussed the growing demand for data center power and the engineering practices needed to meet it. Rivera stressed the importance of engineering efficiency into data center management. “We try to be proactive,” he said. “Power is one of the commodities that technology is demanding. We have to put some engineering practices in place to gain efficiencies where we can.”

Careful planning marks the approach of Rivera and Banisadr, who use modeling software to maximize efficiency and to anticipate HPC needs years ahead of installation. Networked sensors profile temperature, humidity and air pressure at any point under the LANL data center floors, allowing engineers to analyze their cooling systems and tune the facility for efficiency.

Another trend points to liquid cooling. “With the power densities [in data centers] going up towards 100 kilowatts per rack,” said Banisadr, “we see the trend moving towards liquid cooling. With the existing air-handling units, a 41-ton air-handling unit uses roughly 15 to 18 percent of its own capacity to remove the heat from its own electric motors.”

At LANL, the power consumed by these less efficient units translates to around 2 megawatts at a roughly 19-megawatt facility, about 10 percent of the site’s power. That overhead, the engineers noted, is not sustainable when projected toward exascale by the end of the decade.
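For scale, using the standard conversion of one refrigeration ton to roughly 3.52 kW of cooling capacity, Banisadr’s figures imply that each such unit spends tens of kilowatts just removing the heat of its own motors:

\[
41\ \text{tons} \times 3.52\ \mathrm{kW/ton} \approx 144\ \mathrm{kW},
\qquad
(0.15\ \text{to}\ 0.18) \times 144\ \mathrm{kW} \approx 22\ \text{to}\ 26\ \mathrm{kW}.
\]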

Ideas in development to increase efficiency include advanced machine scheduling to maximize capability, modular and customizable power-center technology, and closer cooperation and integration among the facilities, power, cooling and IT teams.

The long-term future of data center power is clear, even if the path to that future is not. “We have a plan,” said Rivera, “and that is to bring in more power.”

—– 

The 2012 Rice Oil and Gas HPC Workshop was, as it has been in the past, organized by a team that includes Henri Calandra of Total, Keith Gray of BP, David Judson of WesternGeco, Bill Menger of Weinman Geoscience, Scott Morton of Hess Corp., Chap Wong of Chevron, and Jan E. Odegard of Rice University.

Presentations and archived webcasts will shortly be available at og-hpc.org.

Next year’s Oil and Gas HPC Workshop will take place at Rice University on Thursday, February 28, 2013.
