PRACEdays 2017 Wraps Up in Barcelona

By Kim McMahon

May 18, 2017

Guest contributor Kim McMahon shares highlights from the final day of the PRACEdays 2017 conference in Barcelona.

Barcelona has been absolutely lovely: the weather, the food, the people. I am, sadly, finishing my last day at PRACEdays 2017 with two sessions: an in-depth look at the oil and gas industry’s use of HPC and a panel discussion on bridging the gap between scientific code development and exascale technology.

Henri Calandra of Total SA spoke on the challenges of increased HPC complexity and value delivery for the oil and gas industry.

The main challenge Total and other oil and gas companies face is that discoveries of oil deposits are becoming rarer. To stay competitive, they need first and foremost to open new frontiers for oil discovery, but do so while reducing risk and costs.

In the 1980s, seismic data was reviewed in two dimensions. The 1990s saw the development of 3D seismic depth imaging. Through the 2000s, 3D depth imaging improved as wave equations were added to traditional imaging. The 2010s brought more physics, more accurate images, and more complex processes for visualizing the seismic data.
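To give a flavor of the wave-equation methods Calandra described, here is a minimal sketch of 1D acoustic wave propagation via finite differences. This is purely illustrative, not Total's production workflow: real seismic depth imaging solves 3D wave equations over heterogeneous earth models on large HPC systems, and the grid size, velocity, and source below are arbitrary toy choices.

```python
# Toy 1D acoustic wave propagation by explicit finite differences.
# Illustrative only; production seismic imaging is 3D, uses
# heterogeneous velocity models, and runs on HPC clusters.

def propagate(nx=200, nt=500, c=1500.0, dx=5.0, dt=0.001):
    """Leapfrog time-stepping of u_tt = c^2 * u_xx on a 1D grid."""
    r2 = (c * dt / dx) ** 2           # squared Courant number (must be <= 1 for stability)
    prev = [0.0] * nx                 # wavefield at time t - dt
    curr = [0.0] * nx                 # wavefield at time t
    curr[nx // 2] = 1.0               # impulsive "source" in the middle of the grid
    for _ in range(nt):
        nxt = [0.0] * nx              # zero (Dirichlet) boundaries at i = 0 and i = nx-1
        for i in range(1, nx - 1):
            nxt[i] = (2.0 * curr[i] - prev[i]
                      + r2 * (curr[i + 1] - 2.0 * curr[i] + curr[i - 1]))
        prev, curr = curr, nxt
    return curr

field = propagate()
```

The point of the sketch is the structure: each time step updates every grid point from its neighbors, which is exactly the kind of regular, compute-heavy kernel that has scaled from weeks to a day as HPC hardware and algorithms improved.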

Henri Calandra of Total SA

The industry continues to see drastic improvements. A seismic simulation that took four weeks to run in 2010 took one day in 2016. Images have significantly higher resolution, and the amount of detail seen in the images enables Total to be more precise in identifying seismic fields and potential hazards in drilling.

If you look closely at the pictures (shown on the slide), you can make out improvements in the image quality. Although the difference may seem slight to our eyes, geoscientists can see the small nuances in the images that help them be more precise, identify hazards, and achieve a better positive acquisition rate.

How did this change over the last 30+ years happen? Improved technology, integrating more advanced technologies, improved processes, more physics, more complex algorithms – basically more HPC.

Using HPC, Total has been able to reduce their risks, become more precise and selective on their explorations, identify potential oil fields faster, and optimize their seismic depth imaging.

What’s next: opening new frontiers enabled by better appraisal of potential new opportunities. HPC has enabled seismic depth imaging methods that can do more iterations, more physics, and more complex approximations. Models are larger, there are multiple resolutions, and there is 4D data. Interactive processing happens during the drilling, and these multiple real-time simulations allow adjustments to the drilling, thus improving the success rate of finding oil.

Developing new algorithms is a long-term process that typically lasts across several generations of supercomputers. Of course, the oil and gas industry is looking forward to exascale. But the future is complex. Complexity in the compute, in the form of manycore processors, accelerators, and heterogeneous systems. Complexity in the storage, with the abundance of data and its movement between tiers across multiple storage technologies. Complexity in the tools, such as OpenCL, CUDA, OpenMP, OpenACC, and compilers. There is a need for standardized tools to hide the hardware complexity and help the users of HPC systems.

None of this can be addressed without HPC specialists. Application development cannot be done without a strong collaboration between the physicist, scientist, and HPC team. This constant progress will continue to improve the predictions Total relies on for finding productive oil fields.

The second session of the day was a panel moderated by Inma Martinez titled “Bridging the gap between scientific code development and exascale technology.” Much of the focus was on the software challenges for extreme-scale computing faced by the community.

The panelists:

Henri Calandra: Total

Lee Margetts: NAFEMS

Erik Lindahl: PRACE Scientific Steering Committee

Frauke Gräter: Heidelberg Institute for Theoretical Studies

Thomas Skordas: European Commission

This highly anticipated session looked at the gap between hardware, software, and application advances and the role of industry, academia and the European Commission in the development of software for HPC systems.

Thomas Skordas pointed out that driving leadership in exascale is important, and that it’s about much more than hardware. It’s the next-generation code, the training, and understanding the opportunities exascale can unlock.

Frauke Gräter sees data as a significant challenge; the accumulation of more and more data and the analysis of that data. In the end, scientists are looking for insights and research organizations will invest in science.

Parallelizing the algorithms is the key action, according to Erik Lindahl. There is too much focus on the exascale machine itself; algorithms need to be good to make the best use of the hardware. Exascale, expected to arrive around 2020, is not expected to be a staple in commercial datacenters until 2035. There is not a supercomputer in the world that does not run open source software, and exascale machines will follow this practice.

Lee Margetts talked of “monster machines,” the large compute clusters in every datacenter. As large vendors adopt artificial intelligence and machine learning, will we see the end of the road for the large “monster” machines? We have very sophisticated algorithms and are using very sophisticated computing. What if the technology used in something like oil and gas were applied to predicting volcanic eruptions or earthquakes? The point being: can technologies be used for more than one science?

Henri Calandra noted that data analytics and storage will become a huge issue. If we move to exascale, we’ll have to deal with thousands of compute nodes and update code for all these machines.

The biggest challenge is the software challenge.

When asked about the new science we will see, the panelists had answers that fit their spheres of knowledge. Thomas spoke of brain modeling and self-driving cars. Frauke added genome assembly and new scientific disciplines such as personalized medicine. She said, “To attract young people, we need to marry machine learning and deep learning into HPC.” Erik noted that we have a revolution of data because of accelerators; data and accelerators enabling genome research will drive work in this area. Lee spoke of integrating machine learning into manufacturing processes.

Kim McMahon, XAND McMahon

As Lee said, “Diversity in funding through the European commission is really important – we need to fund the mavericks as well as the crazy ones.”

My takeaway is that the accomplishment of an exascale machine is not the goal that will drive the technology forward. It’s the analysis of the data. The algorithms. Parallelizing code. There will be some who will buy the exascale machine, but it will be years after it’s available before it’s broadly accepted. As Lee said, “the focus is not the machine, the algorithms or the software, but delivering on the science. Most people in HPC are domain scientists who are trying to solve a problem.”
