Alternative Supercomputing or How to Misuse a Computer

By Tiffany Trader

July 14, 2016

In 2008, the IBM Roadrunner supercomputer broke the petaflops barrier using the power of the heterogeneous Sony Cell Broadband Engine (BE) processor. A year prior, the Cell BE had already made its way into the consumer market as the engine inside the Sony PlayStation 3. The PS3's accelerated design, Linux capability and low price point inspired several organizations, including the United States Air Force Research Laboratory (AFRL), to build Linux clusters out of the gaming machines. While the 2010 AFRL "Condor" deployment was the largest of these efforts, stringing together 1,760 Sony PlayStation 3 boxes for an estimated 500 teraflops of performance, the PS3 cluster effort begun at the University of Massachusetts Dartmouth in 2007 is likely the longest-running.

Gaurav Khanna, a professor in the physics department at UMass Dartmouth, built his first Cell-based cluster with eight PS3s in 2007. The "PS3 Gravity Grid" was used to perform research-grade simulations of black hole systems and was the first PS3 cluster to generate published scientific results. With support from Sony, the cluster was later expanded to 16 PS3s. In 2014, the UMass Dartmouth team, under Khanna's direction, created yet another cluster, housing 308 Sony PS3 gaming consoles, donated from the AFRL effort, in a refrigerated shipping container. Khanna reports that these PS3s are still in use and are delivering performance and performance-per-watt on par with unaccelerated Xeon boxes. UMass Dartmouth is a net energy producer, which makes running this older silicon more feasible than it otherwise might be. More on this to follow.

[Photo: Gaurav Khanna]

After Sony locked down the PS3 OS in 2010 in response to a hacking incident, much of the enthusiasm for the gaming-based clusters fell by the wayside, but Khanna still champions using low-cost consumer (and now mobile) hardware for scientific computing.

Recently the professor began looking into other technologies in the same spirit as the PlayStation 3, which, aside from providing efficient number-crunching, had the economics of a mass-manufactured consumer device with heavily discounted pricing. On account of competition with Microsoft, Sony was selling PS3s for about half of what it cost to make them.

“I’m interested in a cheap device that’s mass manufactured that’s reliable and is high-performance,” says Khanna, “And I think that naturally brings you to two things: video gaming cards like NVIDIA GeForce or the AMD Radeons and mobile chips such as ARM.”

Khanna surveyed the single-board computer (SBC) space, including the Raspberry Pi, for a suitable platform. "While it's true that the Pis sip power," he said, "being in the few hundred megaflops range each, you would have to have so many of them with so many power supplies, cables, network cards and switches to get some substantial performance that it's just not worth the hassle."
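Rough arithmetic bears that out: at, say, 300 megaflops per board, matching even one of the 8.6-teraflops Radeon cards discussed below would take nearly 29,000 Pis, before counting the power supplies, cabling and switching.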

A couple of years ago he began experimenting with AMD Radeon cards to crunch some of his astrophysics codes. With the assistance of students, he had already created OpenCL and CUDA versions of his codes, and the Radeons of course support OpenCL. He reports being impressed by the performance and reliability of the cards. Of the roughly two dozen cards that have been crunching scientific work basically non-stop, 24/7, for the last two years, only two or three have failed, he says. The latest cards he's acquired are Radeon R9 Fury X models, which provide 8.6 teraflops of single-precision floating point performance and 512 GB/s of memory bandwidth for about $460 apiece.

Khanna, a theoretical physicist cum computational scientist who studies the internal workings of black holes and uses theory and computer simulations to predict gravitational wave radiation, says it’s his research into understanding black holes that has benefited most from his use of consumer-class silicon.

“I’m very interested in what happens inside a black hole. There has been a fair amount of work on that over the decades, but a lot of questions still remain on what actually happens if you’re inside a black hole and what kind of effects you could expect to observe and expect to feel and whether there’s a singularity event that happens,” says Khanna. “Those are the kind of codes that I mapped onto the PS3 architecture.”

He explains that because his original code was optimized for the Cell, the move to GPUs was straightforward. "The breakup [in the code] was similar," he says. "You've got the part that needs the I/O done, the CPU, and you've got the parallel part that's going to be done on the GPU, or the Synergistic Processing Elements (SPEs) in the case of the Cell."
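That division of labor is easy to sketch in OpenCL host code. The skeleton below is purely illustrative, not Khanna's code: the file name and array size are hypothetical and error checking is omitted, but it shows the serial setup and I/O staying on the CPU while the data-parallel work is handed to the GPU.

```c
/* Illustrative OpenCL host skeleton: serial setup and I/O on the CPU
 * (the role the Cell's PPE played), data-parallel work on the GPU
 * (the role of the Cell's SPEs). Hypothetical names; no error checks. */
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    /* CPU part: read the initial data from disk */
    float field[1024];
    FILE *f = fopen("initial_data.dat", "rb");  /* hypothetical input */
    fread(field, sizeof(float), 1024, f);
    fclose(f);

    /* CPU part: stand up the GPU context and command queue */
    cl_platform_id plat;
    clGetPlatformIDs(1, &plat, NULL);
    cl_device_id dev;
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* GPU part: copy the field to the device and enqueue the parallel
     * update kernel here (see the kernel sketch below), then read the
     * results back and handle output on the CPU side. */

    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}
```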

The codes are well suited to an FP32-dominant processor, although parts do need FP64, so Khanna employs mixed precision. With FP64 throughput spec'd at one-sixteenth of FP32, the AMD R9 Fury X has only about half a teraflops of double-precision performance, but Khanna says this is sufficient for his needs.
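At the kernel level, the mixed-precision pattern looks something like the sketch below: the bulk of the arithmetic stays in fast FP32 and only the precision-sensitive quantity is promoted to FP64. The update rule is a placeholder, not Khanna's actual code.

```c
/* Hypothetical mixed-precision OpenCL kernel: main update in FP32,
 * with a sensitive accumulation promoted to FP64. */
#pragma OPENCL EXTENSION cl_khr_fp64 : enable

__kernel void evolve_field(__global const float *in,
                           __global float *out,
                           __global double *energy,  /* FP64 only here */
                           const float dt)
{
    size_t i = get_global_id(0);

    /* Bulk arithmetic in single precision, where consumer GPUs
     * like the R9 Fury X deliver their teraflops */
    float v = in[i];
    out[i] = v + dt * v;  /* placeholder update rule */

    /* Precision-sensitive quantity computed in double */
    energy[i] = (double)v * (double)v;
}
```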

"From a cost perspective, I think the AMD cards are a no-brainer – for under $500, you get a few teraflops of performance on this consumer device – while the high-end NVIDIA Tesla products [specifically targeted at HPC] go for a few thousand each," he says.

Budget realities are what got Khanna started on the path to misusing compute for science, as he puts it. "Theoretical physics is an esoteric science without direct implications for high-priority areas like public health, energy and so forth, so it has been an underfunded area," he says. "We do the most with the resources we have – and that has been a primary driver for pretty much my entire career – to find creative ways to do what we need to do but do it cheaply."

"I think the main reason why more people haven't leveraged these AMD Radeon cards for compute is because people are so comfortable with CUDA," Khanna opines, "but once you have a CUDA version it's not that difficult to develop an OpenCL version. It's a bit more complex, but if you have a CUDA version I would say you're most of the way there already. Then there are also the new tools from AMD that help you switch back and forth."
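The correspondence is largely mechanical, as the generic element-wise kernel below is meant to suggest. The OpenCL version is shown with its CUDA counterpart in the comment; this is an illustration of the porting pattern, not code drawn from Khanna's applications.

```c
/* Generic element-wise kernel in OpenCL C. The CUDA original it would
 * be ported from:
 *
 *   __global__ void axpy(int n, float a, const float *x, float *y) {
 *       int i = blockIdx.x * blockDim.x + threadIdx.x;
 *       if (i < n) y[i] = a * x[i] + y[i];
 *   }
 */
__kernel void axpy(const int n, const float a,
                   __global const float *x, __global float *y)
{
    /* get_global_id(0) replaces CUDA's blockIdx/blockDim/threadIdx math */
    int i = (int)get_global_id(0);
    if (i < n)
        y[i] = a * x[i] + y[i];
}
```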

While he was experimenting with the Radeon gaming boards, Khanna also wanted to implement a cluster with a mobile platform. He was looking for something sufficiently powerful yet energy-efficient with support for either CUDA or OpenCL. "If you keep those constraints in line, you find there are really two nice viable platforms – one is the NVIDIA Tegra, which of course supports CUDA, and the other is ODROID boards, developed by Hardkernel, a South Korean purveyor of open-source hardware. The boards use Samsung processors and an ARM Mali GPU that supports OpenCL."

Khanna ended up going with the Tegra X1 series SoC from NVIDIA, in part because he had several colleagues whose codes were better suited for the CUDA framework. He was also impressed with a stated peak performance (single-precision) of 512 gigaflops per card.

In May, UMass Dartmouth’s Center for Scientific Computing & Visualization Research (CSCVR) purchased 32 of these cards at roughly a 50 percent discount from NVIDIA. The total performance of the new cluster, dubbed “Elroy,” is a little over 16 teraflops and it draws only 300 watts of power.

"In terms of power efficiency, the spec'd numbers work out to about 50 gigaflops per watt," he says, sticking with the FP32 metrics. "If you compare with a traditional cluster, it would be a few gigaflops per watt for a CPU-only architecture. A cluster with GPUs delivers about 10-15 gigaflops per watt. So we're talking up to five times more performance-per-watt than your typical GPU-accelerated computer, which is what I was hoping for. In mobile devices, people want longer battery life so a lot of innovation is going into performance-per-watt."
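The spec-sheet arithmetic checks out: 32 boards at 512 gigaflops apiece is roughly 16.4 teraflops, and 16,400 gigaflops divided by 300 watts comes to about 55 gigaflops per watt, in line with Khanna's round figure.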

At this time, Khanna has his codes up and running on Elroy and has completed some benchmarking studies, which he and a colleague detail in their new paper, Scientific Computing Using Consumer Video Gaming Hardware Devices. (The paper has yet to be published but a pre-print copy is available here.)

The move to the Tegra X1 platform was an easier transition than the Cell, reports Khanna, because, well, Fortran. "Even though the Cell could run Fortran, and you could use Fortran to run the code on the accelerator cores, there was no bridging possibility," he explains. "To bring the communication between the two, you had to go to C. So I actually had to write this fairly funky C-based bridge code just to be able to have the two devices communicate in flight. It was ugly."

“A student that was working on the GPU port rewrote the entire code in C,” he says. “So now we’ve had for a few years a C/C++ code that works much better than this old Fortran code bridged with C, which was kind of a mess.”

Although he's very pleased with the performance and energy-efficiency metrics of the Tegra-based "Elroy," Khanna doesn't think the mobile device experiment, as he refers to it, was that advantageous from a cost perspective. While he received a nice discount, the boards have a full sticker price of around $600 each, whereas the ODROID boards are $60 and offer about one-fifth the FP32 performance, so potentially a 2X performance-per-dollar savings. Of course, peak floating point performance does not tell the whole story, but Khanna is optimistic about the ODROID prospects.
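The math behind that 2X: at list price, a Tegra X1 board delivers 512 gigaflops for $600, or about 0.85 gigaflops per dollar, while an ODROID at roughly one-fifth the performance (on the order of 100 gigaflops) for $60 works out to about 1.7 gigaflops per dollar.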

“I think if we had done the ODROIDs instead, that would have been more attractive from a cost perspective, and in fact I think we are going to build a cluster with those ODROID Samsung boards as well for comparison’s sake,” he shares.

Khanna maintains that with the right components he can achieve a factor of five or better on performance per watt and performance per dollar over more traditional server silicon. “All we’re doing is misusing these platforms to do constructive science,” he says.

Free Power, Free Space

One reason Khanna has been able to hold on to older architectures rather than having to rip and replace is the rather unique power and datacenter situation at UMass Dartmouth. Situated on the south coast of Massachusetts, the campus benefits from ample wind power and a natural gas co-generation facility. It recently became energy self-sufficient but cannot sell power back to the grid, which means it actually generates an excess of power.

“So power actually is free on our campus and we are lucky in that way,” Khanna acknowledges. “Cooling ties into that equation as well since there is sufficient energy for the task.”

As for space, virtualization has freed up the IT footprint in the datacenter. The university's student services used to occupy an entire datacenter but now run off a very small virtual cluster. "Our datacenter has slowly transformed itself into a research computing datacenter and that's where all my hardware goes," adds Khanna. "As they free up space, I get a chance to access it."

The Power of Experimentation

A constrained budget wasn't the only thing motivating Khanna to pursue alternative supercomputing platforms; he was also driven by a strong belief in the benefits of local compute resources. He says that the original PS3 cluster effort took place at a time when TeraGrid (the precursor to XSEDE) was experiencing large fluctuations in supply and demand. "Demand was far outpacing supply at the time," he reports. "When you did have time, and you submitted a job, there were long wait times. It was getting to the point that jobs took longer in the queue than they took to run and that's not a productive way to function – you want those times to at least be comparable if not have the queue time be less. I started to think about what is the best way to build my own cluster locally, cheaply."

The situation improved when the NSF built Blue Waters and added additional systems to the XSEDE infrastructure. It was the GPU-heavy Keeneland project clusters at Georgia Tech that Khanna got the most mileage from. When those systems were retired in April 2015 after 5.5 years of service, Khanna was motivated to start searching for a GPU or mobile board that would satisfy his need for cheap, accelerated local compute.

Says Khanna, "The reliance on something like federal supercomputing sites is not a great way – at least in my experience – to be productive in the long run because it varies so much based on what's available and what the demand is. It's good to have local resources, and that's one of the drivers for doing this here – to have independent resources. That has enabled my colleagues and me to do things we really couldn't do before."

Certain models that took months to run on the shared systems, wait times included, can now be completed in an hour or even a few minutes on local resources, according to Khanna. He adds that it's also very useful for the students to have a local machine because it encourages experimentation and skill development.

"One thing that I find really painful about the shared federal sites is you get some time – and because your time is so budgeted – there is a disincentive to try different things, to experiment. Plus, typically you don't get what you ask for; you usually get half or three-fourths of what you wanted. You never want to be in a position in your research to not be able to just mess around. I want to see what happens if I just tweak different parameters. If you get to the point where you start thinking 'is this run worth submitting,' that's where I think scientific productivity goes down," he says.

"Oftentimes you make discoveries in science when you have made a mistake, or you were trying something just for the sake of it, something bizarre, and you learn something from that. Or your code crashes and you make mistakes and you learn something interesting from the forensics. If you are constantly worrying about 'is this going to cost me, is this worth the submission, am I going to lose too much supercomputing time for this job' – that, I think, is a detriment to doing good science. That's where I feel that our local experimental clusters are hugely useful.

“For production level applications and code, when you’ve got something running and you want to run a thousand cases, I think it’s perfectly fine to use supercomputing time at a larger facility, but you don’t have the luxury there to just mess around and that’s where I think a lot of real science happens.”

Khanna takes similar issue with the cloud model, which has the added barrier of a paywall. "While it's great for production research, I feel that is discouraging for science," he says.

The physics professor says he's witnessed a shift toward more of a business model of accountability, with shared resources and the cloud increasingly viewed by funding bodies as a way to manage flat budgets. Khanna is doing what he can to buck this trend. "Of course all campuses are budget-constrained and are directing users toward the cloud with pay-per-use or toward shared federal resources, but as much as I can push back, I will, because the ability to just harmlessly experiment is so important for real science; if you omit that you lose a lot and I hope that doesn't happen."
