Alternative Supercomputing or How to Misuse a Computer

By Tiffany Trader

July 14, 2016

In 2008, the IBM Roadrunner supercomputer broke the petaflops barrier using the power of the heterogeneous Sony Cell Broadband Engine (BE) processor. A year prior, the Cell BE had already made its way into the consumer market as the engine inside the Sony PlayStation 3. The PS3’s accelerated design, Linux capability and low price point inspired several organizations, including the United States Air Force Research Laboratory (AFRL), to build Linux clusters out of the gaming machines. While the 2010 AFRL “Condor” deployment was the largest of these efforts, stringing together 1,760 Sony PlayStation 3 boxes for an estimated 500 teraflops of performance, the PS3 cluster effort begun at the University of Massachusetts Dartmouth in 2007 is likely the longest-running.

Gaurav Khanna, a professor in the physics department at UMass Dartmouth, built his first Cell-based cluster with eight PS3s in 2007. The “PS3 Gravity Grid” was used to perform research-grade simulations of black hole systems and was the first such cluster to generate published scientific results. With support from Sony, the cluster was later expanded to 16 PS3s. In 2014, the UMass Dartmouth team, under Khanna’s direction, created yet another cluster out of 308 Sony PS3 gaming consoles housed in a refrigerated shipping container, using hardware donated from the AFRL effort. Khanna reports that these PS3s are still in use and are delivering performance and performance-per-watt on par with unaccelerated Xeon boxes. UMass Dartmouth is a net energy producer, which makes using this older silicon more feasible than it otherwise might be. More on this to follow.

Gaurav Khanna

After Sony locked down the PS3 OS in 2010 in response to a hacking incident, much of the enthusiasm for the gaming-based clusters fell by the wayside, but Khanna still champions using low-cost consumer (and now mobile) hardware for scientific computing.

Recently the professor began looking into other technologies in the same spirit as the PlayStation 3, which, aside from providing efficient number-crunching, offered the economics of a mass-manufactured consumer device with a heavily discounted pricing model. On account of competition with Microsoft, Sony was selling PS3s for about half what it cost to make them.

“I’m interested in a cheap device that’s mass manufactured that’s reliable and is high-performance,” says Khanna, “And I think that naturally brings you to two things: video gaming cards like NVIDIA GeForce or the AMD Radeons and mobile chips such as ARM.”

Khanna studied the single-board computer (SBC) space, including the Raspberry Pi, in search of a suitable platform. “While it’s true that the Pis sip power,” he said, “being in the few hundred megaflops range each, you would have to have so many of them with so many power supplies, cables, network cards and switches to get some substantial performance that it’s just not worth the hassle.”

A couple of years ago he began experimenting with AMD Radeon cards to crunch some of his astrophysics codes. With the assistance of students, he had already created OpenCL and CUDA versions of his codes, and the Radeon of course supports OpenCL. He reports being impressed by the performance and reliability of the cards: of the roughly two dozen cards crunching scientific work basically non-stop 24/7 for the last two years, only two or three have failed, he says. The latest cards he has acquired are Radeon R9 Fury X models, which provide 8.6 teraflops of single-precision floating point performance and 512 GB/s of memory bandwidth for about $460 each.

Khanna, a theoretical physicist cum computational scientist who studies the internal workings of black holes and uses theory and computer simulations to predict gravitational wave radiation, says it’s his research into understanding black holes that has benefited most from his use of consumer-class silicon.

“I’m very interested in what happens inside a black hole. There has been a fair amount of work on that over the decades, but a lot of questions still remain on what actually happens if you’re inside a black hole and what kind of effects you could expect to observe and expect to feel and whether there’s a singularity event that happens,” says Khanna. “Those are the kind of codes that I mapped onto the PS3 architecture.”

He explains that because his original code was optimized for the Cell, the move to GPUs was straightforward. “The break up [in the code] was similar,” he says. “You’ve got the part that needs the I/O done, the CPU, and you’ve got the parallel part that’s going to be done on the GPU, or the Synergistic Processing Elements (SPEs) in the case of the Cell.”
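To make that division of labor concrete, here is a minimal CUDA sketch of the kind of split he describes – the host handles setup and I/O while the device kernel carries the data-parallel update. It is an illustrative toy (a simple 1-D relaxation stencil), not Khanna’s actual code; the field names, update rule and problem size are invented for the example.

```cuda
// Illustrative sketch only: host does I/O and drives the time loop,
// the GPU kernel does the data-parallel work (the SPEs' role on the Cell).
#include <cstdio>
#include <vector>
#include <utility>
#include <cuda_runtime.h>

// Data-parallel part: a toy relaxation/update step over a 1-D field.
__global__ void update(float *f, const float *f_prev, int n, float c)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n - 1)
        f[i] = f_prev[i] + c * (f_prev[i - 1] - 2.0f * f_prev[i] + f_prev[i + 1]);
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> h(n, 0.0f);
    h[n / 2] = 1.0f;                      // host side: initial data (stands in for file input)

    float *d_a, *d_b;
    cudaMalloc(&d_a, n * sizeof(float));
    cudaMalloc(&d_b, n * sizeof(float));
    cudaMemcpy(d_a, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    for (int step = 0; step < 1000; ++step) {                  // CPU drives the time loop...
        update<<<(n + 255) / 256, 256>>>(d_b, d_a, n, 0.25f);  // ...GPU does the heavy lifting
        std::swap(d_a, d_b);
    }

    cudaMemcpy(h.data(), d_a, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("sample value: %g\n", h[n / 2]);   // host side: output (stands in for file output)

    cudaFree(d_a);
    cudaFree(d_b);
    return 0;
}
```

On the Cell, the same pattern would map the kernel’s work onto the SPEs, with the PPE playing the host role, which is why a code already organized this way ports naturally to GPUs.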

The codes are well-suited to an FP32-dominant processor, although parts do need FP64, so Khanna employs mixed precision. With double-precision throughput at one-sixteenth the single-precision rate, the AMD R9 Fury X delivers only about half a teraflops of spec’d FP64 performance, but Khanna says this is sufficient for his needs.
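Below is a minimal sketch of what such mixed precision can look like in practice, assuming a common pattern rather than Khanna’s actual scheme: the flop-heavy update stays in FP32, while a sensitive global diagnostic is promoted to and accumulated in FP64. (Hardware double-precision atomicAdd requires a GPU of compute capability 6.0 or later; older parts would use a reduction instead.)

```cuda
// Mixed-precision sketch (assumed pattern, not Khanna's code):
// bulk work in FP32, sensitive accumulation in FP64.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void step_and_diagnose(float *f, int n, double *energy)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    f[i] = 0.999f * f[i];                     // cheap FP32 work dominates the flop count
    double e = (double)f[i] * (double)f[i];   // promote only the quantity that needs accuracy
    atomicAdd(energy, e);                     // FP64 accumulation (needs sm_60 or newer)
}

int main()
{
    const int n = 1 << 16;
    std::vector<float> h(n, 1.0f);

    float *d_f; double *d_e; double e = 0.0;
    cudaMalloc(&d_f, n * sizeof(float));
    cudaMalloc(&d_e, sizeof(double));
    cudaMemcpy(d_f, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_e, &e, sizeof(double), cudaMemcpyHostToDevice);

    step_and_diagnose<<<(n + 255) / 256, 256>>>(d_f, n, d_e);

    cudaMemcpy(&e, d_e, sizeof(double), cudaMemcpyDeviceToHost);
    printf("energy diagnostic: %f\n", e);

    cudaFree(d_f);
    cudaFree(d_e);
    return 0;
}
```

Because the FP64 operations are a small fraction of the total, a card with a 1:16 double-precision ratio can still keep the whole run fed.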

“From a cost perspective, I think the AMD cards are a no-brainer – for under $500, you get a few teraflops of performance on this consumer device – while the high-end NVIDIA Tesla products [specifically targeted at HPC] go for a few thousand each,” he says.

Budget realities are what got Khanna started on the path to misusing compute for science, as he puts it. “Theoretical physics is an esoteric science without direct implication on high-priority areas like public health, energy and so forth, so it has been an underfunded area,” he says. “We do the most with the resources we have – and that has been a primary driver for pretty much my entire career – to find creative ways to do what we need to do but do it cheaply.”

“I think the main reason why more people haven’t leveraged these AMD Radeon cards for compute is because people are so comfortable with CUDA,” Khanna opines, “but once you have a CUDA version it’s not that difficult to develop an OpenCL version. It’s a bit more complex, but if you have a CUDA version I would say you’re most of the way there already. Then there’s also the new tools from AMD that help you switch back and forth.”
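As a rough illustration of why such a port is largely mechanical, the sketch below shows a generic saxpy kernel in CUDA with its standard OpenCL counterparts noted in comments – the thread-index bookkeeping, memory qualifiers and host-side calls map nearly one-to-one. This is a generic example, not one of Khanna’s kernels, and the OpenCL names in the comments are simply the standard API equivalents.

```cuda
// Generic CUDA saxpy with the OpenCL equivalents noted alongside.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// CUDA: __global__ void saxpy(...)        OpenCL: __kernel void saxpy(__global float *y, ...)
__global__ void saxpy(float *y, const float *x, float a, int n)
{
    // CUDA thread index                    OpenCL: int i = get_global_id(0);
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);

    float *d_x, *d_y;
    cudaMalloc(&d_x, n * sizeof(float));    // OpenCL: clCreateBuffer(...)
    cudaMalloc(&d_y, n * sizeof(float));
    cudaMemcpy(d_x, x.data(), n * sizeof(float), cudaMemcpyHostToDevice);  // clEnqueueWriteBuffer
    cudaMemcpy(d_y, y.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // CUDA launch syntax                   OpenCL: clSetKernelArg + clEnqueueNDRangeKernel
    saxpy<<<(n + 255) / 256, 256>>>(d_y, d_x, 3.0f, n);

    cudaMemcpy(y.data(), d_y, n * sizeof(float), cudaMemcpyDeviceToHost);  // clEnqueueReadBuffer
    printf("y[0] = %g (expect 5)\n", y[0]);

    cudaFree(d_x);
    cudaFree(d_y);
    return 0;
}
```

The kernel-side changes are typically the bulk of a port; the host-side boilerplate differs more between the two APIs but is formulaic.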

While he was experimenting with the Radeon gaming boards, Khanna also wanted to implement a cluster with a mobile platform. He was looking for something sufficiently powerful yet energy-efficient with support for either CUDA or OpenCL. “If you keep those constraints in line, you find there are really two nice viable platforms – one is the NVIDIA Tegra, which of course supports CUDA, and the other is the ODROID boards, developed by Hardkernel, a South Korean purveyor of open-source hardware. The boards use Samsung processors and an ARM Mali GPU that supports OpenCL.”

Khanna ended up going with the Tegra X1 series SoC from NVIDIA, in part because he had several colleagues whose codes were better suited for the CUDA framework. He was also impressed with a stated peak performance (single-precision) of 512 gigaflops per card.

In May, UMass Dartmouth’s Center for Scientific Computing & Visualization Research (CSCVR) purchased 32 of these cards at roughly a 50 percent discount from NVIDIA. The total performance of the new cluster, dubbed “Elroy,” is a little over 16 teraflops and it draws only 300 watts of power.

“In terms of power efficiency, the spec’d numbers turn out to be about 50 gigaflops per watt,” he says, sticking with the FP32 metrics. “If you compare with a traditional cluster, it would be a few gigaflops per watt for a CPU-only architecture. A cluster with GPUs delivers about 10-15 gigaflops per watt. So we’re talking up to five times more performance-per-watt than your typical GPU-accelerated computer, which is what I was hoping for. In mobile devices, people want longer battery life so a lot of innovation is going into performance-per-watt.”

At this time, Khanna has his codes up and running on Elroy and has completed some benchmarking studies, which he and a colleague detail in their new paper, Scientific Computing Using Consumer Video Gaming Hardware Devices. (The paper has yet to be published, but a pre-print copy is available online.)

The move to the Tegra X1 platform was an easier transition than the Cell, reports Khanna, because, well, Fortran. “Even though the Cell could run Fortran, and you could use Fortran to run the code on the accelerator cores, there was no bridging possibility,” he explains. “To bring the communication between the two, you had to go to C. So I actually had to write this fairly funky C-based bridge code just to be able to have the two devices communicate in flight. It was ugly.”

“A student that was working on the GPU port rewrote the entire code in C,” he says. “So now we’ve had for a few years a C/C++ code that works much better than this old Fortran code bridged with C, which was kind of a mess.”

Although he’s very pleased with the performance and energy-efficiency metrics of the Tegra-based “Elroy,” Khanna doesn’t think the mobile device experiment, as he refers to it, was that advantageous from a cost perspective. He received a nice discount, but the boards carry a full sticker price of around $600 each, while the ODROID boards cost $60 and offer about one-fifth the FP32 performance – at list price, roughly twice the performance per dollar. Of course, peak floating point performance does not tell the whole story, but Khanna is optimistic about the ODROID prospects.

“I think if we had done the ODROIDs instead, that would have been more attractive from a cost perspective, and in fact I think we are going to build a cluster with those ODROID Samsung boards as well for comparison’s sake,” he shares.

Khanna maintains that with the right components he can achieve a factor of five or better on performance per watt and performance per dollar over more traditional server silicon. “All we’re doing is misusing these platforms to do constructive science,” he says.

Free Power, Free Space

One reason Khanna has been able to hold on to older architectures rather than rip and replace is the rather unusual power and datacenter situation at UMass Dartmouth. Located on the south coast of Massachusetts, the campus benefits from ample wind power and a natural gas co-generation facility. It recently became self-sufficient for power but cannot sell power back to the grid – which means it actually generates an excess of power it cannot otherwise use.

“So power actually is free on our campus and we are lucky in that way,” Khanna acknowledges. “Cooling ties into that equation as well since there is sufficient energy for the task.”

As for space, virtualization has freed up the IT footprint in the datacenter. What used to occupy an entire datacenter – the university’s student services systems – now runs off a very small virtual cluster. “Our datacenter has slowly transformed itself into a research computing datacenter and that’s where all my hardware goes,” adds Khanna. “As they free up space, I get a chance to access it.”

The Power of Experimentation

A constrained budget wasn’t the only thing motivating Khanna to pursue alternative supercomputing platforms; he was also driven by a strong belief in the benefits of local compute resources. He says the original PS3 cluster effort took place at a time when TeraGrid (the precursor to XSEDE) was experiencing large fluctuations in supply and demand. “Demand was far outpacing supply at the time,” he reports. “When you did have time, and you submitted a job, there were long wait times. It was getting to the point that jobs took longer in the queue than they took to run and that’s not a productive way to function – you want those times to at least be comparable if not have the queue time be less. I started to think about what is the best way to build my own cluster locally, cheaply.”

The situation improved when the NSF built Blue Waters and added systems to the XSEDE infrastructure. It was the GPU-heavy Keeneland project clusters at Georgia Tech that Khanna got the most mileage from. When those systems were retired in April 2015 after 5.5 years of service, Khanna was motivated to start searching for a consumer GPU or mobile board that would satisfy his need for cheap, accelerated local compute.

Says Khanna, “The reliance on something like federal supercomputing sites is not a great way – at least in my experience – to be productive in the long run because it varies so much based on what’s available and what the demand is. It’s good to have local resources, and that’s one of the drivers for me doing this here – to have independent resources. That has enabled my colleagues and me to do things we really couldn’t do before.”

Certain models that would take months to run on the shared systems, including wait times, can now be done in an hour or even a few minutes on local resources, according to Khanna. He adds that it’s also very useful for the students to have a local machine because it encourages experimentation and skill development.

“One thing that I feel really painful about the shared federal sites is you get some time – and because your time is so budgeted – there is a disincentive to try different things, to experiment. Plus typically you don’t get what you ask for, you get usually half or three-fourths of what you wanted. You never want to be in a position in your research to not be able to just mess around. I want to see what happens if I just tweak different parameters. If you get to the point where you start thinking ‘is this run worth submitting,’ that’s where I think scientific productivity goes down,” he says.

“Often times you make discoveries in science when you have made a mistake or you were trying something just for the sake of trying it, something bizarre, and you learn something from that. Or your code crashes and you make mistakes and you learn something interesting from the forensics. If you are constantly worrying about is this going to cost me, is this worth the submission, am I going to lose too much supercomputing time for this job – that, I think, is a detriment to doing good science. That’s where I feel that our local experimental clusters are hugely useful.

“For production level applications and code, when you’ve got something running and you want to run a thousand cases, I think it’s perfectly fine to use supercomputing time at a larger facility, but you don’t have the luxury there to just mess around and that’s where I think a lot of real science happens.”

Khanna takes similar issue with the cloud model, which has the added barrier of a paywall. “While it’s great for production research, I feel that is discouraging for science,” he says.

The physics professor says he’s witnessed a shift toward more of a business model of accountability, with shared resources and the cloud increasingly viewed by funding bodies as a way to manage flat budgets. Khanna is doing what he can to buck this trend. “Of course all campuses are budget-constrained and are directing users toward the cloud with pay-per-use or toward shared federal resources, but as much as I can push back, I will, because the ability to just harmlessly experiment is so important for real science; if you omit that you lose a lot and I hope that doesn’t happen.”
