Alternative Supercomputing or How to Misuse a Computer

By Tiffany Trader

July 14, 2016

In 2008, the IBM Roadrunner supercomputer broke the petaflops barrier using the power of the heterogeneous Sony Cell Broadband Engine (BE) processor. A year prior, the Cell BE had already made its way into the consumer market as the engine inside the Sony PlayStation 3. The PS3’s accelerated design, Linux capability and low price point inspired several organizations, including the United States Air Force Research Laboratory (AFRL), to build Linux clusters out of the gaming machines. While the 2010 AFRL “Condor” deployment was the largest of these efforts, stringing together 1,760 Sony PlayStation 3 boxes for an estimated 500 teraflops of performance, the PS3 cluster effort begun at the University of Massachusetts Dartmouth in 2007 is likely the longest-running.

Gaurav Khanna, a professor in the physics department at UMass Dartmouth, built his first Cell-based cluster with eight PS3s in 2007. The “PS3 Gravity Grid” was used to perform research-grade simulations of black hole systems and was the first such cluster to generate published scientific results. With support from Sony, the cluster was later expanded to 16 PS3s. In 2014, the UMass Dartmouth team, under Khanna’s direction, built yet another cluster from 308 Sony PS3 gaming consoles housed in a refrigerated shipping container, using hardware donated from the AFRL effort. Khanna reports that these PS3s are still in use, delivering performance and performance-per-watt on par with unaccelerated Xeon boxes. UMass Dartmouth is a net energy producer, which makes running this older silicon more feasible than it otherwise might be. More on this to follow.

[Image: Gaurav Khanna]

After Sony locked down the PS3 OS in 2010 in response to a hacking incident, much of the enthusiasm for the gaming-based clusters fell by the wayside, but Khanna still champions using low-cost consumer (and now mobile) hardware for scientific computing.

Recently the professor began looking into other technologies in the same spirit as the PlayStation 3, which, aside from providing efficient number-crunching, offered the economics of a mass-manufactured consumer device with a heavily discounted pricing model. Owing to competition with Microsoft, Sony was selling PS3s for about half of what they cost to make.

“I’m interested in a cheap device that’s mass manufactured, that’s reliable and is high-performance,” says Khanna, “and I think that naturally brings you to two things: video gaming cards like NVIDIA GeForce or the AMD Radeons, and mobile chips such as ARM.”

Khanna surveyed the single-board computer (SBC) space, including the Raspberry Pi, for a suitable platform. “While it’s true that the Pis sip power,” he said, “being in the few hundred megaflops range each, you would have to have so many of them, with so many power supplies, cables, network cards and switches, to get some substantial performance that it’s just not worth the hassle.”

A couple of years ago he began experimenting with AMD Radeon cards to crunch some of his astrophysics codes. With the assistance of students, he had already created OpenCL and CUDA versions of his codes, and the Radeon of course supports OpenCL. He reports being impressed by the performance and reliability of the cards: of the roughly two dozen cards that have been crunching scientific work basically non-stop for the last two years, only two or three have failed, he says. The latest cards he’s acquired are Radeon R9 Fury X boards, which provide 8.6 teraflops of single-precision floating point performance and 512 GB/s of memory bandwidth for about $460.

Khanna, a theoretical physicist cum computational scientist who studies the internal workings of black holes and uses theory and computer simulations to predict gravitational wave radiation, says it’s his research into understanding black holes that has benefited most from his use of consumer-class silicon.

“I’m very interested in what happens inside a black hole. There has been a fair amount of work on that over the decades, but a lot of questions still remain on what actually happens if you’re inside a black hole and what kind of effects you could expect to observe and expect to feel and whether there’s a singularity event that happens,” says Khanna. “Those are the kind of codes that I mapped onto the PS3 architecture.”

He explains that because his original code was optimized for the Cell, the move to GPUs was straightforward. “The break up [in the code] was similar,” he says. “You’ve got the part that needs the I/O done, the CPU, and you’ve got the parallel part that’s going to be done on the GPU, or the Synergistic Processing Elements (SPEs) in the case of the Cell.”
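To make that division of labor concrete, here is a minimal sketch of such a host/accelerator split in OpenCL, the framework Khanna uses on the Radeon cards. The kernel and all names are hypothetical illustrations, not taken from his codes: the host handles setup and I/O while the data-parallel update is offloaded, much as it was to the Cell’s SPEs.

```c
/* Build with e.g.: gcc split.c -lOpenCL  (error checks omitted for brevity) */
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

/* The data-parallel "field update" runs on the GPU (the SPEs' old job). */
static const char *src =
    "__kernel void evolve(__global float *f, const float dt) {\n"
    "    size_t i = get_global_id(0);\n"
    "    f[i] += dt * f[i];\n"
    "}\n";

int main(void) {
    size_t n = 1 << 20;
    float dt = 1.0e-3f;
    float *host = malloc(n * sizeof(float));
    for (size_t i = 0; i < n; ++i) host[i] = 1.0f;

    /* Host (CPU) side: setup, data staging and I/O. */
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                n * sizeof(float), host, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "evolve", NULL);
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(k, 1, sizeof(float), &dt);

    /* Offload the parallel part, then bring the result home for I/O. */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, n * sizeof(float), host,
                        0, NULL, NULL);
    printf("f[0] after one step: %f\n", host[0]);
    free(host);
    return 0;
}
```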

The codes are well-suited to an FP32-dominant processor, although parts do need FP64, so Khanna employs mixed precision. At a one-sixteenth FP64:FP32 ratio, the AMD R9 Fury X card has only about half a teraflop of spec’d FP64 performance (8.6 teraflops / 16 ≈ 0.54), but Khanna says this is sufficient for his needs.
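A common mixed-precision pattern, and a plausible reading of what such codes do (this sketch is illustrative, not drawn from Khanna’s work), is to run the bulk arithmetic in FP32 and reserve FP64 for the few round-off-sensitive steps, such as global reductions:

```c
#include <stdio.h>

#define N 1000000

int main(void) {
    static float f[N];
    for (int i = 0; i < N; ++i) f[i] = 1.0e-4f;

    /* Bulk update in FP32: this is where consumer cards shine
     * (8.6 TFLOPS on the Fury X). */
    for (int i = 0; i < N; ++i) f[i] *= 1.0001f;

    /* FP64 only where accumulated round-off would swamp the answer;
     * the card's ~0.54 TFLOPS of FP64 is plenty for this small share. */
    double sum = 0.0;
    for (int i = 0; i < N; ++i) sum += (double)f[i];
    printf("FP64-accumulated sum: %.10f\n", sum);
    return 0;
}
```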

“From a cost perspective, I think the AMD cards are a no-brainer – for under $500, you get a few teraflops of performance on this consumer device – while the high-end NVIDIA Tesla products [specifically targeted at HPC] go for a few thousand dollars each,” he says.

Budget realities are what started Khanna down the path of misusing compute for science, as he puts it. “Theoretical physics is an esoteric science without direct implication on high-priority areas like public health, energy and so forth, so it has been an underfunded area,” he says. “We do the most with the resources we have – and that has been a primary driver for pretty much my entire career – to find creative ways to do what we need to do but do it cheaply.”

“I think the main reason why more people haven’t leveraged these AMD Radeon cards for compute is because people are so comfortable with CUDA,” Khanna opines, “but once you have a CUDA version it’s not that difficult to develop an OpenCL version. It’s a bit more complex, but if you have a CUDA version I would say you’re most of the way there already. Then there’s also the new tools from AMD that help you switch back and forth.”
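The port is indeed largely mechanical for simple kernels, as the hypothetical example below suggests (the AMD tools Khanna mentions are presumably the HIP conversion tooling AMD introduced around this time). Mostly the launch indexing idiom changes; the kernel body carries over:

```c
/* CUDA version of a trivial kernel:
 *
 *   __global__ void axpy(int n, float a, const float *x, float *y) {
 *       int i = blockIdx.x * blockDim.x + threadIdx.x;
 *       if (i < n) y[i] = a * x[i] + y[i];
 *   }
 *
 * OpenCL C equivalent, usable with the host skeleton shown earlier: */
static const char *axpy_opencl =
    "__kernel void axpy(const int n, const float a,\n"
    "                   __global const float *x, __global float *y) {\n"
    "    int i = get_global_id(0);  /* replaces the blockIdx/threadIdx math */\n"
    "    if (i < n) y[i] = a * x[i] + y[i];  /* body unchanged */\n"
    "}\n";
```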

[Image: UMass Dartmouth’s “Elroy” cluster]

While he was experimenting with the Radeon gaming boards, Khanna also wanted to implement a cluster with a mobile platform. He was looking for something sufficiently powerful yet energy-efficient with support for either CUDA or OpenCL. “If you keep those constraints in line, you find there are really two nice viable platforms – one is the NVIDIA Tegra, which of course supports CUDA, and the other is ODROID boards, developed by Hardkernel, a South Korean purveyor of open-source hardware. The boards use Samsung processors and an ARM Mali GPU that supports OpenCL.”

Khanna ended up going with the Tegra X1 series SoC from NVIDIA, in part because he had several colleagues whose codes were better suited for the CUDA framework. He was also impressed with a stated peak performance (single-precision) of 512 gigaflops per card.

In May, UMass Dartmouth’s Center for Scientific Computing & Visualization Research (CSCVR) purchased 32 of these cards at roughly a 50 percent discount from NVIDIA. The total performance of the new cluster, dubbed “Elroy,” is a little over 16 teraflops and it draws only 300 watts of power.

“In terms of power efficiency, the spec’d numbers work out to about 50 gigaflops per watt,” he says, sticking with the FP32 metrics. “If you compare with a traditional cluster, it would be a few gigaflops per watt for a CPU-only architecture. A cluster with GPUs delivers about 10-15 gigaflops per watt. So we’re talking up to five times more performance-per-watt than your typical GPU-accelerated computer, which is what I was hoping for. In mobile devices, people want longer battery life so a lot of innovation is going into performance-per-watt.”
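Those figures follow directly from the spec numbers quoted above; a quick back-of-envelope check, using only values from this article:

```c
#include <stdio.h>

int main(void) {
    double gflops_per_card = 512.0;  /* Tegra X1 FP32 peak, per NVIDIA spec */
    int    cards           = 32;     /* Elroy's card count */
    double watts           = 300.0;  /* reported cluster power draw */

    double total = gflops_per_card * cards;
    printf("total:      %.1f TFLOPS\n", total / 1000.0);  /* ~16.4, "a little over 16" */
    printf("efficiency: %.1f GFLOPS/W\n", total / watts); /* ~54.6, "about 50" */
    return 0;
}
```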

At this time, Khanna has his codes up and running on Elroy and has completed some benchmarking studies, which he and a colleague detail in their new paper, Scientific Computing Using Consumer Video Gaming Hardware Devices. (The paper has yet to be published but a pre-print copy is available here.)

The move to the Tegra-X1 platform was an easier transition than the Cell, reports Khanna, because, well, Fortran. “Even though the Cell could run Fortran, and you could use Fortran to run the code on the accelerator cores, there was no bridging possibility,” he explains. “To bring the communication between the two, you had to go to C. So I actually had to write this fairly funky C-based bridge code just to be able to have the two devices communicate in flight. It was ugly.”

“A student that was working on the GPU port rewrote the entire code in C,” he says. “So now we’ve had for a few years a C/C++ code that works much better than this old Fortran code bridged with C, which was kind of a mess.”

Although he’s very pleased with the performance and energy-efficiency metrics of the Tegra-based “Elroy,” Khanna doesn’t think the mobile device experiment, as he refers to it, was that advantageous from a cost perspective. While he received a nice discount, the boards have a full sticker price of around $600 each, whereas the ODROID boards cost $60 and offer about one-fifth the FP32 performance – roughly 100 gigaflops per $60 versus 512 gigaflops per $600, or potentially a 2X performance-per-dollar advantage. Of course, peak floating point performance does not tell the whole story, but Khanna is optimistic about the ODROID prospects.

“I think if we had done the ODROIDs instead, that would have been more attractive from a cost perspective, and in fact I think we are going to build a cluster with those ODROID Samsung boards as well for comparison’s sake,” he shares.

Khanna maintains that with the right components he can achieve a factor of five or better on performance per watt and performance per dollar over more traditional server silicon. “All we’re doing is misusing these platforms to do constructive science,” he says.

Free Power, Free Space

One reason Khanna has been able to hold on to older architectures, rather than having to rip and replace, is the rather unique power and datacenter situation at UMass Dartmouth. Sitting on the south coast of Massachusetts, UMass Dartmouth benefits from ample wind power and a natural gas co-generation facility. The campus recently became self-sufficient in power but cannot sell power back to the grid – which means it actually generates an excess of power.

“So power actually is free on our campus and we are lucky in that way,” Khanna acknowledges. “Cooling ties into that equation as well since there is sufficient energy for the task.”

As for space, virtualization has freed up the IT footprint in the datacenter. The university’s student services used to occupy an entire datacenter but now run off a very small virtual cluster. “Our datacenter has slowly transformed itself into a research computing datacenter and that’s where all my hardware goes,” adds Khanna. “As they free up space, I get a chance to access it.”

The Power of Experimentation

A constrained budget wasn’t the only thing motivating Khanna to pursue alternative supercomputing platforms; he was also driven by a strong belief in the benefits of local compute resources. He says that the original PS3 cluster effort took place at a time when TeraGrid (the precursor to XSEDE) was experiencing large fluctuations in supply and demand. “Demand was far outpacing supply at the time,” he reports. “When you did have time, and you submitted a job, there were long wait times. It was getting to the point that jobs took longer in the queue than they took to run and that’s not a productive way to function – you want those times to at least be comparable, if not have the queue time be less. I started to think about the best way to build my own cluster locally, cheaply.”

The situation improved when the NSF built Blue Waters and added systems to the XSEDE infrastructure. It was the GPU-heavy Keeneland project clusters at Georgia Tech that Khanna got the most mileage from. When those systems were retired in April 2015 after 5.5 years of service, Khanna was motivated to start searching for a GPU or mobile board that would satisfy his need for cheap, accelerated local compute.

Says Khanna, “The reliance on something like federal supercomputing sites is not a great way – at least in my experience – to be productive in the long run because it varies so much based on what’s available and what the demand is. It’s good to have local resources, and that’s one of the drivers for doing this here: to have independent resources. That has enabled me and my colleagues to do things we really couldn’t do before.”

Certain models that took months to run on the shared systems, wait times included, can now be completed in an hour or even a few minutes on local resources, according to Khanna. He adds that a local machine is also very useful for students because it encourages experimentation and skill development.

“One thing that I find really painful about the shared federal sites is that, because your time is so budgeted, there is a disincentive to try different things, to experiment. Plus, typically you don’t get what you ask for; you usually get half or three-fourths of what you wanted. You never want to be in a position in your research where you are not able to just mess around. I want to see what happens if I just tweak different parameters. If you get to the point where you start thinking ‘is this run worth submitting,’ that’s where I think scientific productivity goes down,” he says.

“Oftentimes you make discoveries in science when you have made a mistake or you were trying something bizarre just for the sake of it, and you learn something from that. Or your code crashes and you make mistakes and you learn something interesting from the forensics. If you are constantly worrying about ‘is this going to cost me, is this worth the submission, am I going to lose too much supercomputing time for this job’ – that, I think, is a detriment to doing good science. That’s where I feel that our local experimental clusters are hugely useful.

“For production level applications and code, when you’ve got something running and you want to run a thousand cases, I think it’s perfectly fine to use supercomputing time at a larger facility, but you don’t have the luxury there to just mess around and that’s where I think a lot of real science happens.”

Khanna takes similar issue with the cloud model, which has the added barrier of a pay wall. “While it’s great for production research, I feel that it is discouraging for science,” he says.

The physics professor says he’s witnessed a shift toward more of a business model of accountability, with shared resources and the cloud increasingly viewed by funding bodies as a way to manage flat budgets. Khanna is doing what he can to buck this trend. “Of course all campuses are budget-constrained and are directing users toward the cloud with pay-per-use or toward shared federal resources, but as much as I can push back, I will, because the ability to just harmlessly experiment is so important for real science; if you omit that you lose a lot, and I hope that doesn’t happen.”
