Oil and Gas Supercloud Clears Out Remaining Knights Landing Inventory: All 38,000 Wafers

By Tiffany Trader

March 13, 2019

The McCloud HPC service being built by Australia’s DownUnder GeoSolutions (DUG) outside Houston is set to become the largest oil and gas cloud in the world this year, providing 250 single-precision petaflops for DUG’s geoscience services business and its expanding HPC-as-a-service client roster. Located on a 20-acre datacenter campus in Katy, Texas, the liquid-cooled infrastructure will comprise the largest installation of Intel Knights Landing (KNL) nodes in the world.

If you’d like to follow suit with your own KNL cluster and you don’t have the hardware already, you’re out of luck because not only has the product been discontinued (you knew this), but DUG has cleared out all the remaining inventory, snagging 38,000 wafers. We hear DUG similarly took care of Intel’s leftover Knights Corner inventory back in 2014 (and those cards are still going strong processing DUG’s workloads).

At the very well-attended Rice Oil & Gas conference in Houston last week, we spoke with Phil Schwan, CTO for DUG, who also delivered a presentation at the event. We chatted about DUG’s success with Phi, their passion for immersion cooling, and some of the interesting decisions that went into the new facility, like the choice to run at 240 volts, as well as McCloud’s custom network design.

DUG started off in oil services, in quantitative interpretation, before getting into processing and imaging, which has been the company’s bread and butter for over a decade. But Schwan emphasized that DUG is first and foremost an HPC company. “That’s been our real focus in how we set ourselves apart – we have terrific geoscientists, but they are empowered to such a large degree by the hardware and the software,” he shared.

“Bruce,” DUG’s Perth cluster, is built from KNL nodes totaling 20 single-precision petaflops. The “Bubba” tanks currently being installed at the Houston Skybox facility will look similar to these. (Photo provided by DUG.)

DUG currently enjoys somewhere in the neighborhood of 50 aggregate (single-precision) petaflops spread across the world (the company has processing centers in Perth, London, Kuala Lumpur, and Houston) but it is continually hitting its head on this ceiling. At the Skybox Datacenters campus, located in Katy, Texas, eight miles east of the company’s U.S. headquarters in Houston, DUG will not only be adding to its internal resources for its geoscience services business, it will be priming the pump (significantly so) for its HPC as a Service business that it unveiled at SEG last year.

“Up until now it’s been a purely service business – processing, imaging, FWI, and so on, but as soon as Skybox opens in early Q2, we’ll have a lot more cycles to sell to third parties – and we have a few of those clients already beta testing the service both in Australia and here in the Americas.”

To meet that demand, DUG has ordered the remaining Phi Knights Landing inventory from Intel, all 38,000 wafers. Once dies are cut and rolled into servers, the nodes will be combined with an infusion of KNLs transferred from DUG’s other Houston site (the Houston capacity is collectively referred to as “Bubba”) to provide around 40,000 total nodes with a peak output of about 250 (single precision) petaflops.
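As a rough sanity check, those figures imply a per-node peak of about 6 single-precision teraflops, which lines up with a mid-range KNL part. The exact SKU mix is not stated in the article, so the per-node number below is an assumption:

```python
# Sanity check: ~40,000 KNL nodes vs. the quoted ~250 SP petaflops.
# Assumed per-node peak: a Xeon Phi 7250 (68 cores at 1.4 GHz, two
# AVX-512 VPUs per core) peaks at roughly 6.1 single-precision teraflops.
nodes = 40_000
sp_tflops_per_node = 6.1                       # assumed per-node SP peak
total_petaflops = nodes * sp_tflops_per_node / 1_000
print(f"~{total_petaflops:.0f} SP petaflops")  # ~244, consistent with "about 250"
```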

Schwan describes why DUG is so partial to the Phi products (the company is almost certainly Intel’s largest customer of this line):

“There were a few reasons – number one, we came to the accelerator party fashionably late, and I think that worked well for us because if we had had to choose five years earlier, we would have chosen GPUs, and all of our codes would have gone in that direction and we’d be stuck there. Whereas our transition first to Knights Corner and then to Knights Landing – even if Intel did a bit of disservice by pretending that it’s trivial and you just recompile and run – they are so much closer to the classic x86 architectures that we are already used to that we were able to take all of our existing skill sets, our existing toolchains and so on and make incremental improvements to make it run really well on the KNLs.

“The other thing is we run a bunch of things that are not necessarily hyper-optimized for the KNL – we run a lot of Java on the KNL and it runs great. And there’s AVX512 vectorization in the JVM now as well – again if we write the code intelligently and it uses lots of threads and it’s not terrible with how it addresses memory, the KNLs for us have been a huge win.”

Memory was another plus in Phi’s column, though DUG will offer alternatives based on price-performance and customer demand. “If you just look at the price of a high-end GPU – KNL comes with 16 gigs of on-package memory, which is huge; to get a 16-gig GPU you are talking many multiples of the price we pay for KNLs,” he said. “So it’s a no-brainer from a bang-for-buck perspective. But at the end of the day we are not really religious about it – if something else comes along that has better TCO, then we’ll buy that instead. If we have McCloud clients, as we already do, who say we must have GPUs because we have this or that code that we don’t want to rewrite, then we’ll get the GPUs.”

Although the new service is named “McCloud,” Schwan himself is a little wary of the cloud terminology.

“Nowadays everybody is talking about cloud, but some people still hear cloud and they go, ‘Yuck, I don’t want to do anything with the cloud. The cloud is a pain in the ass. They don’t understand my business.’ We’ve tried to provide something that is geared for geoscience, for the market we know really well that provides as much or as little as they want to take advantage of. So it can be just hardware cycles, which is fine, but I think in a way is the less interesting part of the solution. We also provide our entire software stack for as much or as little as they want to use. Our own service business expertise, so for example if you’re a major oil company who is really excited about FWI or wanting to focus on migration, but maybe you don’t want to do all the preprocessing – you could absolutely get us to do the preprocess on DUG McCloud and then you go and do your special sauce.”

DUG CTO Phil Schwan presenting at Rice Oil and Gas conference on March 5, 2019 – click to enlarge

Speaking of special sauce, a major one for DUG is immersion cooling.

The new immersion tanks at Skybox are DUG’s sixth-iteration design. The delta-T between input water and output water is only about 4-5 degrees Celsius, and DUG can hold that pretty much anywhere in the world, even in Perth and Houston summers, with evaporative chillers.

“If you only have to get it from 35 to 30 [degrees Celsius] that’s pretty easy to do,” said Schwan of the cooling technology. “But the design inside the room, actually getting everything lined up so you have the right amount of flow, and the right amount of pressure and you have the right valves and you make sure the tank at the very end of the row has the same cooling capacity as the tank at the very beginning of the row – all of these things just take detail and somebody who’s willing to crunch the numbers and make sure from an engineering perspective that it’s all done right. We haven’t invented immersion cooling by any means, but I think we’ve just tried to look at every element and simplify as much as possible, whether it’s something as obvious as not having lids on the tank to having a heat exchanger that’s immersed in the tank so the fluid never leaves the tank – that was, as far as I know, the first time that’s ever been done. And it just eliminates so much complexity, piping and manifolds, and pumping and computers to control all of it – we just don’t need any of that.”
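For a sense of what that 4-5 degree delta-T implies on the facility water loop, here is a back-of-the-envelope flow-rate estimate. The 15 MW hall load comes from later in the article; water as the working fluid on the facility side and the mid-point delta-T are assumptions:

```python
# Facility-water flow needed to carry the heat: Q = m_dot * c_p * dT.
heat_load_w = 15e6                     # W, assumed hall load (quoted later)
c_p = 4186                             # J/(kg*K), specific heat of water
delta_t = 4.5                          # K, mid-point of the quoted 4-5 C range
m_dot = heat_load_w / (c_p * delta_t)  # required mass flow, kg/s
print(f"{m_dot:.0f} kg/s of cooling water")   # ~796 kg/s
```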

DUG’s initial ~40,000-node install at Skybox requires 500,000 litres of polyalphaolefin dielectric fluid to fill all the tanks; the fluid – a standard synthetic oil that is single-phase, so it doesn’t evaporate and the tanks need no covers – is being brought in this week. At this stage, all of the plumbing in the room is complete, the underfloor electrical and panels are done, and all the raised flooring is in. Outside, the pumps are going down, and the chillers, the big switchgear, and the big transformers are being set in place. Energizing the facility is slated for the second half of April.

Inside Skybox data hall 2, future home of DUG’s ~40,000 KNL nodes. Five hundred 1,000-litre IBCs will be brought in to fill all the tanks with coolant. (Photo provided by DUG.)

Bringing Australian voltage to the U.S.

DUG’s facility at Skybox will run at 240 volts, whereas nearly all datacenters in the United States run at 110 volts. So how did this happen? It typically takes two transformations to step down from medium voltage to 110 volts, and each of those transforms costs about 5 percent, Schwan explained in his presentation. DUG was not willing to give up that 5 percent.
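The arithmetic behind that decision is simple compounding of per-transform losses. The 5 percent per step is Schwan's figure; treating the two steps as independent is an assumption:

```python
# Two ~95%-efficient transformations vs. one, per Schwan's ~5% per step.
loss_per_transform = 0.05
two_step = (1 - loss_per_transform) ** 2   # medium voltage -> intermediate -> 110 V
one_step = 1 - loss_per_transform          # medium voltage -> 240 V directly
saving = one_step - two_step
print(f"{saving:.2%} of incoming power saved")   # 4.75%
```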

“We decided to run at more typical Australian voltage – 240 volts – and we do it with a single transform. None of the sparkies [electricians] wanted to deal with it here,” said Schwan. “We had to work very closely with Skybox and with the utility to make this happen, but we were able to make it happen.”

Photo showing 1 MW panel boards, presented by Phil Schwan at Rice Oil and Gas conference on March 5, 2019

The benefits didn’t stop with the efficiencies gained by cutting out some of the transforms. “Because it’s 240 volt, we’re obviously running with lower current, and this means we can get away with fewer circuit boards, fewer circuit breakers, fewer PDUs, fewer everything. Each one of these boards is a 1 MW panel board – it operates an entire row of tanks – and we run 70 amp 3-phase all the way to the tank, which means we get away with about half as many of these as we otherwise would have. We were able to use this otherwise dead space – in a room this size, we have these posts to hold the ceiling up and by being able to fit a minimum number of panel boards in this otherwise dead space, we can fit an extra tank in the room – that’s an extra 2.5 percent worth of gear.”

I asked Schwan if he thinks other American datacenters should be following suit and where the tipping point is to gain an advantage by switching to 240 volt. “I think that most people doing HPC at the scale of the oil and gas industry are big enough to benefit from it,” he said. “But I think there are bigger wins even before that. Like immersion, I don’t know how anybody doing HPC at scale doesn’t see something like our immersion solution and not have it become a personal religion. The savings are just so compelling, the economics are just overwhelming.”

Outside the Box

Diagram of 2RU chassis with Mellanox networking schema

Schwan also had some interesting things to report on DUG’s other hardware choices for its Skybox deployment, noting that while they’ve always done standard 10 Gigabit Ethernet, they went a different direction with this new facility, working closely with Mellanox. “We had everybody in the universe wanting to make a bid for that network and some very interesting proposals came out – but at the end of the day, the Mellanox solution really was a very outside of the box solution and provided some amazing advantages,” he said.

The technology, created exclusively for DUG, allows four servers to use a single 50 Gb/s network connection, with each node able to burst to over 30 Gb/s. The design relies on Mellanox’s SN2700 32-port 100 Gb/s Ethernet switches.

In each four-node chassis, DUG deployed a multi-host network adapter: the NIC sits in node one, and the other three boards connect to it over PCIe. Schwan said this delivered a slew of benefits. The most obvious: with only one external connection coming into each chassis, DUG needs a quarter as many cables, a savings of ~30,000 cables. They were also able to put 200 nodes on a single switch, roughly halving the total number of switches in the room.
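Those savings fall straight out of the fan-in arithmetic. The sketch below uses the article's figures (~40,000 nodes, four per chassis, 200 nodes per switch); the port split on the SN2700 (32 x 100 GbE ports, each breakable into 2 x 50 GbE) is an inference about how 50 chassis fit on one switch:

```python
# Fan-in arithmetic for the multi-host design, using the article's figures.
nodes = 40_000
nodes_per_chassis = 4
nodes_per_switch = 200

cables_saved = nodes - nodes // nodes_per_chassis
print(cables_saved)                            # 30000 fewer cables

chassis_per_switch = nodes_per_switch // nodes_per_chassis   # 50 chassis
# Inferred port budget: 32 x 100 GbE ports, each split into 2 x 50 GbE,
# gives 64 usable 50 GbE ports per switch.
split_50g_ports = 32 * 2
print(split_50g_ports - chassis_per_switch)    # 14 ports left for uplinks etc.
```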

“But it also gives us some things that we weren’t really expecting,” Schwan said, “for example the standard networks we operate, they are all 10 Gigabit Ethernet and every node stands on its own so they can peak at 10 Gb/s. This one is a single 50 Gb/s connection coming into the [chassis] and any node on its own can burst at 30 Gb/s – and of course [the four nodes] can do 12.5 Gb/s each, but especially within this four-node chassis, we have very low latency and extremely high bandwidth, which we take advantage of in applications.

“The other thing worth emphasizing is that none of the bridging in the master node goes through the OS; it’s all handled by the card, so it’s not adding any extra load to that master node. The master node doesn’t even have to be on; as long as it’s plugged into the chassis it gets the power it needs to power that master node and do the bridging. We haven’t created any new single points of failure in that chassis by having this single NIC.”

With all of these combined efficiencies, DUG’s datacenter achieves a power usage effectiveness (PUE) under 1.05.
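Since PUE is total facility power divided by IT power, a PUE under 1.05 caps everything that isn't compute, cooling and power distribution included, at under 5 percent of the IT load. Against the ~15 MW hall load quoted later in the article:

```python
# PUE = total facility power / IT power, so overhead = IT load * (PUE - 1).
it_load_mw = 15.0      # approximate hall load from the article
pue = 1.05
overhead_mw = it_load_mw * (pue - 1)
print(f"at most {overhead_mw:.2f} MW of non-IT overhead")   # 0.75 MW
```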

We’re Number One?

We’ve previously reported that despite being on the bleeding edge of industrial compute capacity, DUG does not have Top500 aspirations. That has not changed. “Our application demands are nothing like what the Top500 demands and there is virtually no value for me in putting in an astronomically low latency interconnect, which is what I would need to do to do Top500 effectively on that number of processors,” said Schwan.

Skybox Houston blueprint. DUG CTO Phil Schwan presenting at Rice Oil and Gas conference on March 5, 2019 – click to enlarge

DUG has big plans to expand into the massive Skybox campus and talks confidently about reaching exascale.

“We’re going to put 40,000 compute nodes in this blue area, we have the data hall next door the same size and ready for us to start building as soon as this one’s finished, and then what you can’t see off the top of this diagram past the service yard, is 10 acres of land that we have architectural plans to build on when these two are full, so by the time this is done, this should easily be a multi-exaflop campus of compute centers,” Schwan said in his presentation, referencing his slide (see photo at right).

The data hall with all the small blue rectangles (each representing one tank) will draw about 15 MW. This comes out to about 8.5 kilowatts per square meter, or about 800 watts per square foot. That is about three to four times the density of what you’d find in a typical, still fairly high-density colo facility, by Schwan’s account.
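The unit conversion can be checked directly (1 square meter is about 10.764 square feet), and the same figures imply the hall's tank floor area:

```python
# 8.5 kW/m^2 expressed in W/ft^2, and the floor area a 15 MW hall implies.
kw_per_m2 = 8.5
ft2_per_m2 = 10.7639
w_per_ft2 = kw_per_m2 * 1000 / ft2_per_m2
print(f"{w_per_ft2:.0f} W/ft^2")            # ~790, i.e. "about 800 W per sq ft"

hall_mw = 15.0
area_m2 = hall_mw * 1000 / kw_per_m2        # MW -> kW, divided by kW/m^2
print(f"{area_m2:.0f} m^2 of hall floor")   # ~1765 m^2
```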

Given how beneficial Phi has been for DUG, now that Phi is gone, what will they buy next? Schwan, who maintains a spreadsheet evaluating various processors’ TCO, likes the Intel Xeon roadmap, which he sees as having all of the best features of Phi.

“If you look at the roadmap for Xeons going forward, it’s got AVX512, it’s got 40, 50, 60 cores, it’s got high-bandwidth memory as an option on package – well what is that? It’s a Xeon Phi. Assuming the TCO continues to be as excellent as it has been from Intel, I think the classic Xeon line will be a very natural fit for us. They almost have the accelerators built onto the chips now – but every six months, we look at it again, to see what’s interesting.”

Schwan reported that DUG has committed customers but it is not ready to announce them yet. “We’re not ready to make a big splash because I don’t have anything to sell them yet, that will be in Q2.”
