Oil and Gas HPC Workshop Highlights Industry Challenges

By Ted Walker

March 14, 2012

Leaders from the oil and gas industry and the high performance computing and information technology industries, along with academics and representatives from national laboratories, met at Rice University in Houston, Texas, on March 1 for the 5th annual Rice Oil and Gas HPC Workshop.

The oil and gas industry depends heavily on high performance computing to generate meaningful returns on its investments in drilling and production. The industry's enormous data and processing demands are driven by the services that support geophysical mapping, such as seismic imaging and reservoir simulation, which help companies assess reservoirs and place wells.

The workshop drew a record 300 attendees, who heard talks from industry and academic leaders and participated in sessions that delved into the tools and techniques leading HPC forward in the oil and gas industry. Organized by the Ken Kennedy Institute for Information Technology at Rice University, the event is more than a conference of formal talks: networking and conversation are at the forefront, and vendors and participants from around the world engaged in dialogue about opportunities and challenges.

“The growth of the workshop continues to show the importance of high performance computing as a critical business enabler and differentiator in the oil and gas industry, with a well-understood return on investment,” said Jan Odegard, executive director of the Ken Kennedy Institute. “The energy at this year’s conference is a strong indicator of the desire to attack head-on the challenges in applying computing across the oil and gas industry.”

Interactive talks covered a wide range of topics, from algorithm optimization and performance and programming tools, such as HPCToolkit and Loo.py, to programming models and languages, such as Co-array Fortran and OpenCL. Discussion also covered the challenges of developing and managing HPC facilities and infrastructure, as well as the open-source software framework IWAVE.

The workshop also featured several keynote addresses on emerging HPC and data center challenges in the oil and gas industry:

Cray CEO Ungaro on Oil & Gas: “We’re Back in the Industry”

Peter Ungaro, President and Chief Executive Officer of Cray Inc., delivered the workshop’s opening keynote address, and he spoke about his company’s foray into oil and gas.

“If I was sitting in your chair right now,” Ungaro said, “I would wonder, ‘What are you doing here, Pete? Cray is a company that mostly goes to national laboratories. Were you trying to get to Oak Ridge last night and you got off in Houston by accident?’ Well, I would tell you that the same challenges that are happening in building some of the biggest supercomputers in the world are going to hit the oil and gas industry.

“From my perspective,” he continued, “[these challenges] are going to change the kind of machines that we put in production within the oil and gas community, and a lot of the requirements are fundamentally going to shift as we go through these changes in processing. The requirements keep rising. The machines that we’re using today are not the kind of machines that we are going to be using in a few years.”

Ungaro also addressed what he called the compelling business case for HPC in the oil and gas industry.

“When we look at the requirements from a compute and data standpoint, they are huge,” he said. “From data acquisition, through seismic, through reservoir simulation, and to downstream needs, that is a very broad set of applications that stresses many different parts of the machine and many different aspects of processing, not only for petaflop-sized systems for doing state-of-the-art computations, but also on the data side. From that perspective I think it’s a very unique kind of model.

“An accurate seismic image has huge returns. A well can cost a lot of money, and restating a reserve has some serious business implications. When the requirements and the returns are huge, the demand for getting it right really goes up.”

Ungaro also noted that, as systems increase in complexity and integration, the industry will need greater investment in software technology that supports both scale-up and scale-out architectures and takes advantage of accelerators, all while hiding that complexity on the front end.

“There was a time when there were a lot of Cray computers in the oil and gas industry,” Ungaro concluded. “We’re back in the industry.”

IWAVE: An Open-Source Software Framework

William Symes, Director of The Rice Inversion Project and Professor of Computational and Applied Mathematics at Rice University, gave a detailed introduction to IWAVE, an open-source framework for regular-grid finite-difference modeling.

The IWAVE package evolved from its beginnings as a quality-control component of the SEG Advanced Modeling (SEAM) project into a framework that can be used to explore new algorithms, port code, benchmark architectures, and test tools and new ideas. In several breakout workshops, Ted Barragy of AMD and Murtaza Ali of Texas Instruments (TI) demonstrated the framework's versatility by porting IWAVE components to AMD's Fusion APU and TI's C66x KeyStone-based multicore DSP.
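At its core, the regular-grid finite-difference modeling that IWAVE organizes advances a discretized wave equation one time step at a time over a uniform mesh. The minimal C sketch below of a second-order 1D acoustic update is our own illustration of that idea, with hypothetical names and parameters; it is not IWAVE's actual API or code.

```c
#include <stdio.h>
#include <stdlib.h>

#define NX 1000  /* spatial grid points */
#define NT 500   /* time steps */

/* One explicit time step of the 1D acoustic wave equation
   u_tt = c^2 u_xx, using second-order centered differences on a
   regular grid. Illustrative only; IWAVE's kernels are far more
   general (3D, higher-order stencils, absorbing boundaries). */
static void step(const double *prev, const double *cur, double *next,
                 const double *vel, double dt, double dx)
{
    for (int i = 1; i < NX - 1; i++) {
        double r   = vel[i] * dt / dx;               /* Courant number */
        double lap = cur[i - 1] - 2.0 * cur[i] + cur[i + 1];
        next[i] = 2.0 * cur[i] - prev[i] + r * r * lap;
    }
    next[0] = next[NX - 1] = 0.0;                    /* fixed boundaries */
}

int main(void)
{
    double *prev = calloc(NX, sizeof *prev);
    double *cur  = calloc(NX, sizeof *cur);
    double *next = calloc(NX, sizeof *next);
    double *vel  = malloc(NX * sizeof *vel);

    for (int i = 0; i < NX; i++) vel[i] = 1500.0;    /* water velocity, m/s */
    cur[NX / 2] = 1.0;                               /* impulsive source */

    for (int t = 0; t < NT; t++) {
        step(prev, cur, next, vel, 0.001, 2.0);      /* dt = 1 ms, dx = 2 m */
        double *tmp = prev; prev = cur; cur = next; next = tmp;
    }
    printf("wavefield at center after %d steps: %g\n", NT, cur[NX / 2]);

    free(prev); free(cur); free(next); free(vel);
    return 0;
}
```

Frameworks like IWAVE generalize this pattern to three dimensions, higher-order stencils, absorbing boundaries, and parallel domain decomposition, which is what makes them attractive targets for porting and benchmarking exercises like the ones above.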

In another workshop talk, John Mellor-Crummey, Professor of Computer Science at Rice University, used IWAVE as an example of how HPCToolkit can provide insight for performance optimization.

Envisioning Exascale

Rajeeb Hazra, General Manager of Intel's Technical Computing Group, looked ahead to the journey from petascale to exascale computing.

“Why do we care about getting to exascale?” he asked. “Scientists either have more to do, or they need to do the same thing much quicker.” Accepting the assumption that there is a continual need for more performance in HPC, Hazra called attention to the cost of the pace of improvement already achieved: power consumption has risen in step with performance.

“Exascale is not about one computer,” Hazra continued. “Exascale is a statement that means you have hundreds of computers that are a petaflop at half a rack. That’s the promise of exascale. You should look at it as a supply chain that has suddenly been enabled at a much higher level, not just that the top of the supply chain has the biggest computer to run on.

“We’ve gone from a few kilowatts for the largest computing systems, to the largest system in existence, which is the K computer in Japan, taking 10 megawatts per ten petaflops. So it looks wonderful, but we’ve done this by scaling out systems faster than Moore’s Law. We have not done what we need to do, which is to change the performance density.

“You have to now approach this problem from a systems perspective,” Hazra continued. “My message to programmers: your mindset needs to change. The more sequential you are, the more problems you are creating for yourself. You are thinking about performance and results in an old paradigm. Whether it’s domain-specific languages or particular pragmas or a library-based approach to getting parallel, you have to be able to express it. Parallelism with no locality, and locality with no parallelism are extremes you have to avoid.” 
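One concrete reading of that advice, offered here as our own illustration rather than an example from the talk, is to express work as parallel loops but block them so that each thread's working set stays cache-resident. A minimal OpenMP sketch in C:

```c
#include <stdio.h>
#include <omp.h>

#define N    4096
#define TILE 64    /* block size chosen so each tile fits in cache */

static float a[N][N], b[N][N];

int main(void)
{
    /* Matrix transpose, a = b^T. The outer loops distribute whole
       tiles across threads (parallelism); the inner loops keep each
       thread inside a small block of both arrays (locality). */
    #pragma omp parallel for collapse(2) schedule(static)
    for (int ii = 0; ii < N; ii += TILE)
        for (int jj = 0; jj < N; jj += TILE)
            for (int i = ii; i < ii + TILE; i++)
                for (int j = jj; j < jj + TILE; j++)
                    a[i][j] = b[j][i];

    printf("transposed %dx%d with up to %d threads\n",
           N, N, omp_get_max_threads());
    return 0;
}
```

A naive parallel transpose would expose the same parallelism but stride poorly through one of the arrays; tiling restores locality without giving up any parallelism, the middle ground Hazra described.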

He also stressed hardware and software co-design, with a feedback loop between the two parties. “Co-design has to be something that changes what you are doing and what I was going to do in a concerted way,” he said. “It’s not requirements gathering. It’s a tough problem. It can be like trying to get the Congress to work together, but it has to be done.”

The Power Challenge

Los Alamos National Laboratory (LANL) engineers Richard Rivera and Farhad Banisadr discussed the growing demand for data center power and the engineering practices needed to meet it. Rivera stressed the importance of engineering efficiency into data center management. “We try to be proactive,” he said. “Power is one of the commodities that technology is demanding. We have to put some engineering practices in place to gain efficiencies where we can.”

Careful planning marks the approach of Rivera and Banisadr, who use modeling software to maximize efficiency and to anticipate HPC needs years ahead of installation. Networked sensors at the LANL data center profile temperature, humidity, and air pressure at any point under the data center floors, allowing engineers to analyze their cooling systems and tune the facility for peak efficiency.
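As a rough illustration of what such instrumentation enables, a simple scan over a grid of under-floor temperature readings can flag hot spots and track the average; the layout, readings, and threshold below are hypothetical, not LANL's actual tooling.

```c
#include <stdio.h>

#define ROWS 4      /* hypothetical sensor grid under the floor */
#define COLS 6
#define HOT_C 27.0  /* hypothetical alert threshold, degrees C */

int main(void)
{
    /* Hypothetical under-floor temperature readings, one per sensor. */
    double temp[ROWS][COLS] = {
        {21.0, 22.5, 23.0, 24.0, 22.0, 21.5},
        {22.0, 24.5, 28.0, 27.5, 23.0, 22.0},
        {21.5, 23.0, 26.0, 25.0, 22.5, 21.0},
        {20.5, 21.0, 22.0, 22.5, 21.5, 20.0},
    };
    double sum = 0.0;

    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++) {
            sum += temp[r][c];
            if (temp[r][c] >= HOT_C)  /* flag spots that need more airflow */
                printf("hot spot at sensor (%d,%d): %.1f C\n",
                       r, c, temp[r][c]);
        }
    printf("mean under-floor temperature: %.1f C\n", sum / (ROWS * COLS));
    return 0;
}
```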

Another trend is the move toward liquid cooling. “With the power densities [in data centers] going up towards 100 kilowatts per rack,” said Banisadr, “we see the trend moving towards liquid cooling. With the existing air-handling units, a 41-ton air-handling unit uses roughly 15 to 18 percent of its own capacity to remove the heat from its own electric motors.”

At LANL, the power needed to run these less efficient cooling units translates to around 2 megawatts at a roughly 19-megawatt facility, an overhead that is not sustainable when projected toward exascale by the end of the decade.

Ideas in development to increase efficiency include advanced machine scheduling to maximize capability, modular and customizable power center technology, and closer cooperation and integration among facilities, power, cooling, and IT teams.

The long-term future of data center power is clear, even if the path to that future is not. “We have a plan,” said Rivera, “and that is to bring in more power.”

—– 

The 2012 Rice Oil and Gas HPC Workshop was, as it has been in the past, organized by a team that includes Henri Calandra of Total, Keith Gray of BP, David Judson of WesternGeco, Bill Menger of Weinman Geoscience, Scott Morton of Hess Corp., Chap Wong of Chevron, and Jan E. Odegard of Rice University.

Presentations and archived webcasts will shortly be available at og-hpc.org.

Next year’s Oil and Gas HPC Workshop will take place at Rice University on Thursday, February 28, 2013.
