Oil and Gas HPC Workshop Highlights Industry Challenges

By Ted Walker

March 14, 2012

Leaders from the oil and gas industry and from the high performance computing and information technology industries, along with academics and representatives from national laboratories, met at Rice University in Houston, Texas, on March 1 for the 5th annual Rice Oil and Gas HPC Workshop.

The oil and gas industry depends heavily on high performance computing to spur meaningful returns on its investments in drilling and production. The industry’s enormous data and processing demands are driven by the workloads that support geophysical mapping, such as seismic imaging and reservoir simulation, which help companies assess reservoirs and place wells.

The workshop drew a record 300 attendees, who heard talks from industry and academic leaders and participated in workshops that delved into the tools and techniques leading the way forward for HPC in the oil and gas industry. The event is organized by the Ken Kennedy Institute for Information Technology at Rice University. More than a conference of formal talks, it puts networking and conversation at the forefront, and vendors and participants from around the world engaged in dialogue about opportunities and challenges.

“The growth of the workshop continues to show the importance of high performance computing as a critical business enabler and differentiator in the oil and gas industry, with a well-understood return on investment,” said Jan Odegard, executive director of the Ken Kennedy Institute. “The energy at this year’s conference is a strong indicator of the desire to attack head-on the challenges in applying computing across the oil and gas industry.”

Interactive talks covered a wide range of topics, from algorithm optimization and performance and programming tools, such as HPCToolkit and Loo.py, to programming models and languages, such as Co-array Fortran and OpenCL. Discussion also covered the challenges of developing and managing HPC facilities and infrastructure, as well as the open-source software framework IWAVE.

The workshop also featured several keynote addresses on emerging HPC and data center challenges in the oil and gas industry:

Cray CEO Ungaro on Oil & Gas: “We’re Back in the Industry”

Peter Ungaro, President and Chief Executive Officer of Cray Inc., delivered the workshop’s opening keynote address, and he spoke about his company’s foray into oil and gas.

“If I was sitting in your chair right now,” Ungaro said, “I would wonder, ‘What are you doing here, Pete? Cray is a company that mostly goes to national laboratories. Were you trying to get to Oak Ridge last night and you got off in Houston by accident?’ Well, I would tell you that the same challenges that are happening in building some of the biggest supercomputers in the world are going to hit the oil and gas industry.

“From my perspective,” he continued, “[these challenges] are going to change the kind of machines that we put in production within the oil and gas community, and a lot of the requirements are fundamentally going to shift as we go through these changes in processing. The requirements keep rising. The machines that we’re using today are not the kind of machines that we are going to be using in a few years.”

Ungaro also addressed what he called the compelling business case for HPC in the oil and gas industry.

“When we look at the requirements from a compute and data standpoint, they are huge,” he said. “From data acquisition, through seismic, through reservoir simulation, and to downstream needs, that is a very broad set of applications that stresses many different parts of the machine and many different aspects of processing, not only for petaflop-sized systems for doing state-of-the-art computations, but also on the data side. From that perspective I think it’s a very unique kind of model.

“An accurate seismic image has huge returns. A well can cost a lot of money, and restating a reserve has some serious business implications. When the requirements and the returns are huge, the demand for getting it right really goes up.”

As systems increase in complexity and integration, Ungaro noted, there is also a need for increased investment in software technology that supports both scale-up and scale-out architectures and takes advantage of accelerators, all while hiding that complexity on the front end.

“There was a time when there were a lot of Cray computers in the oil and gas industry,” Ungaro concluded. “We’re back in the industry.”

IWAVE: An Open-Source Software Framework

William Symes, Director of The Rice Inversion Project and Professor of Computational and Applied Mathematics at Rice University, gave a detailed introduction to IWAVE, an open-source framework for regular-grid finite-difference modeling.

The IWAVE package evolved from its beginnings as a QC component of the SEG Advanced Modeling (SEAM) project into a framework that can be used to explore new algorithms, port code, benchmark architectures, and test tools and new ideas. As part of several breakout workshops, Ted Barragy of AMD and Murtaza Ali of Texas Instruments (TI) demonstrated the versatility of the IWAVE framework for the oil and gas industry by porting components to AMD’s Fusion APU and TI’s C66x KeyStone-based multicore DSP.
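To make the modeling concrete, the following is a minimal sketch, not IWAVE source code, of the kind of regular-grid finite-difference kernel such a framework is built around; the function and variable names are illustrative only. Inner loops of this shape are what get ported to APUs, DSPs and other accelerators in exercises like the ones Barragy and Ali described.

    /* Illustrative sketch only (not IWAVE code): one time step of a second-order
       finite-difference update for the 1-D acoustic wave equation on a regular grid.
       courant2[i] holds (c[i] * dt / dx)^2 for each grid cell. */
    void fd_step(int n,
                 const double *courant2,
                 const double *u_prev,   /* wavefield at t - dt */
                 const double *u_cur,    /* wavefield at t      */
                 double       *u_next)   /* wavefield at t + dt */
    {
        for (int i = 1; i < n - 1; i++) {
            /* discrete Laplacian in space */
            double lap = u_cur[i + 1] - 2.0 * u_cur[i] + u_cur[i - 1];
            /* leapfrog update in time */
            u_next[i] = 2.0 * u_cur[i] - u_prev[i] + courant2[i] * lap;
        }
    }

A production framework wraps loops like this with domain decomposition, boundary handling and I/O, which is precisely the scaffolding IWAVE provides so that researchers can focus on the kernel itself.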

In another workshop talk, John Mellor-Crummey, Professor of Computer Science at Rice University, used IWAVE as an example of how HPCToolkit can be used to gain insight into performance optimization.

Envisioning Exascale

Rajeeb Hazra, Intel’s General Manager of the Technical Computing Group, addressed the transition from petascale to exascale computing and the challenges on the journey ahead.

“Why do we care about getting to exascale?” he asked. “Scientists either have more to do, or they need to do the same thing much quicker.” Accepting the assumption that there is a continual need for more performance in HPC, Hazra called attention to the cost of the pace of improvement already achieved: power usage has climbed alongside performance.

“Exascale is not about one computer,” Hazra continued. “Exascale is a statement that means you have hundreds of computers that are a petaflop at half a rack. That’s the promise of exascale. You should look at it as a supply chain that has suddenly been enabled at a much higher level, not just that the top of the supply chain has the biggest computer to run on.

“We’ve gone from a few kilowatts for the largest computing systems to the largest system in existence, which is the K computer in Japan, taking 10 megawatts per ten petaflops. So it looks wonderful, but we’ve done this by scaling out systems faster than Moore’s Law. We have not done what we need to do, which is to change the performance density.
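A back-of-the-envelope calculation using the figures Hazra cited makes the point, assuming efficiency stays flat:

    10 petaflops / 10 megawatts      =  1 gigaflop per watt
    1 exaflop at 1 gigaflop per watt =  10^18 / 10^9 watts  =  1,000 megawatts

At today’s efficiency, an exaflop machine would draw on the order of a gigawatt, which is why the argument centers on changing performance density rather than simply building bigger systems.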

“You have to now approach this problem from a systems perspective,” Hazra continued. “My message to programmers: your mindset needs to change. The more sequential you are, the more problems you are creating for yourself. You are thinking about performance and results in an old paradigm. Whether it’s domain-specific languages or particular pragmas or a library-based approach to getting parallel, you have to be able to express it. Parallelism with no locality, and locality with no parallelism are extremes you have to avoid.” 
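As one illustration of the “particular pragmas” route Hazra mentions, and of pairing parallelism with locality, consider the hedged sketch below; it is not drawn from his talk, just a minimal C and OpenMP example of expressing parallelism while keeping memory access contiguous.

    /* Illustrative only: the "pragma" route to parallelism, with locality in mind.
       Rows are divided among threads (parallelism); within each row, elements are
       touched in contiguous order (locality), avoiding the two extremes Hazra
       warns against. Compile with OpenMP enabled, e.g. -fopenmp. */
    void scale_rows(int nrows, int ncols, double *a, const double *row_scale)
    {
        #pragma omp parallel for
        for (int i = 0; i < nrows; i++)          /* parallel across rows       */
            for (int j = 0; j < ncols; j++)      /* contiguous within each row */
                a[(long)i * ncols + j] *= row_scale[i];
    }

Domain-specific languages and library-based approaches aim at the same goal: letting the programmer state the parallelism and locality without hand-managing every detail of the hardware beneath.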

He also stressed hardware and software co-design, with a feedback loop between the two parties. “Co-design has to be something that changes what you are doing and what I was going to do in a concerted way,” he said. “It’s not requirements gathering. It’s a tough problem. It can be like trying to get the Congress to work together, but it has to be done.”

The Power Challenge

Los Alamos National Laboratory (LANL) engineers Richard Rivera and Farhad Banisadr discussed the increased demand for data center power, and the engineering practices to meet those needs. Rivera stressed the importance of engineering efficiency into data center management. “We try to be proactive,” he said. “Power is one of the commodities that technology is demanding. We have to put some engineering practices in place to gain efficiencies where we can.”

Careful planning marks the approach of Rivera and Banisadr, who use modeling software to maximize efficiency and to anticipate HPC needs years ahead of installation. Networked sensors at the LANL data center profile temperature, humidity and air pressure at any point under the data center floors, allowing engineers to analyze the performance of their cooling systems and tune the facility accordingly.

Another trend points to liquid cooling. “With the power densities [in data centers] going up towards 100 kilowatts per rack,” said Banisadr, “we see the trend moving towards liquid cooling. With the existing air-handling units, a 41-ton air-handling unit uses roughly 15 to 18 percent of its own capacity to remove the heat from its own electric motors.”

At LANL, the power consumed by these less efficient cooling units translates to around 2 megawatts at a roughly 19-megawatt facility, a figure that is not sustainable when projected towards exascale by the end of the decade.
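A rough reading of those figures, using only the numbers cited above, shows the scale of the overhead:

    2 MW / 19 MW  ≈  0.105, or roughly one watt in ten spent on cooling losses

Held at that ratio, every additional 10 megawatts of compute capacity would bring about another megawatt of cooling overhead, which is the trajectory the LANL engineers describe as unsustainable on the way to exascale.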

Ideas in development to increase efficiency include advanced machine scheduling to maximize capabilities, modular and customizable power center technology, and closer cooperation and integration among the facilities, power, cooling and IT teams.

The long-term future of data center power is clear, even if the path to that future is not. “We have a plan,” said Rivera, “and that is to bring in more power.”

—– 

The 2012 Rice Oil and Gas HPC Workshop was, as it has been in the past, organized by a team that includes Henri Calandra of Total, Keith Gray of BP, David Judson of WesternGeco, Bill Menger of Weinman Geoscience, Scott Morton of Hess Corp., Chap Wong of Chevron, and Jan E. Odegard of Rice University.

Presentations and archived webcasts will shortly be available at og-hpc.org.

Next year’s Oil and Gas HPC Workshop will take place at Rice University on Thursday, February 28, 2013.
