Alternative Supercomputing or How to Misuse a Computer

By Tiffany Trader

July 14, 2016

In 2008, the IBM Roadrunner supercomputer broke the petaflops barrier using the power of the heterogeneous Sony Cell Broadband Engine (BE) processor. A year prior, the Cell BE had already made its way into the consumer market as the engine inside the Sony PlayStation 3. The PS3’s accelerated design, Linux capability and low price point inspired several organizations, including the United States Air Force Research Laboratory, to build Linux clusters out of the gaming machines. While the 2010 AFRL “Condor” deployment was the largest of these efforts, stringing together 1,760 Sony PlayStation 3 boxes for an estimated 500 teraflops of performance, the PS3 cluster effort begun at the University of Massachusetts Dartmouth in 2007 is likely the longest-running.

Gaurav Khanna, a professor in the physics department at UMass Dartmouth, built his first Cell-based cluster with eight PS3s in 2007. The “PS3 Gravity Grid” was used to perform research-grade simulations of black hole systems and was the first PS3 cluster to generate published scientific results. With support from Sony, the cluster was later expanded to 16 PS3s. In 2014, the Dartmouth team, under Khanna’s direction, created yet another cluster out of 308 Sony PS3 gaming consoles housed in a refrigerated shipping container, using hardware donated from the AFRL effort. Khanna reports that these PS3s are still in use and are delivering performance and performance-per-watt on par with unaccelerated Xeon boxes. UMass Dartmouth is a net energy producer, which makes running this older silicon more feasible than it otherwise might be. More on this to follow.

[Photo: Gaurav Khanna]

After Sony locked down the PS3 OS in 2010 in response to a hacking incident, much of the enthusiasm for the gaming-based clusters fell by the wayside, but Khanna still champions using low-cost consumer (and now mobile) hardware for scientific computing.

Recently the professor began looking into other technologies in the same spirit as the PlayStation 3, which, aside from providing efficient number-crunching, offered the economics of a mass-manufactured consumer device sold at a heavily discounted price. On account of competition with Microsoft, Sony was selling the PS3s for about half what it cost to make them.

“I’m interested in a cheap device that’s mass manufactured that’s reliable and is high-performance,” says Khanna, “And I think that naturally brings you to two things: video gaming cards like NVIDIA GeForce or the AMD Radeons and mobile chips such as ARM.”

Khanna surveyed the single-board computer (SBC) space, including the Raspberry Pi, for a suitable platform. “While it’s true that the Pis sip power,” he said, “being in the few hundred megaflops range each, you would have to have so many of them with so many power supplies, cables, network cards and switches to get some substantial performance that it’s just not worth the hassle.”
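The back-of-the-envelope arithmetic bears that out. Taking “a few hundred megaflops” as, say, 0.3 gigaflops per board (an illustrative figure, not a formal benchmark), reaching even a few teraflops requires on the order of ten thousand Pis:

```latex
% Illustrative only: "a few hundred megaflops" taken as 0.3 gigaflops per Pi
\[
  \frac{3000\ \text{gigaflops (a few teraflops)}}{0.3\ \text{gigaflops per Pi}}
  = 10{,}000\ \text{boards}
\]
```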

A couple of years ago he began experimenting with AMD Radeon cards to crunch some of his astrophysics codes. With the assistance of students, he had already created OpenCL and CUDA versions of his codes, and the Radeon of course supports OpenCL. He reports being impressed by the performance and reliability of the cards. Of the roughly two dozen cards that have been crunching scientific work basically non-stop, 24/7, for the last two years, only two or three have failed, he says. The latest cards he’s acquired are Radeon R9 Fury X boards, which provide 8.6 teraflops of single-precision floating point computing power and 512 GB/s of memory bandwidth for about $460.

Khanna, a theoretical physicist cum computational scientist who studies the internal workings of black holes and uses theory and computer simulations to predict gravitational wave radiation, says it’s his research into understanding black holes that has benefited most from his use of consumer-class silicon.

“I’m very interested in what happens inside a black hole. There has been a fair amount of work on that over the decades, but a lot of questions still remain on what actually happens if you’re inside a black hole and what kind of effects you could expect to observe and expect to feel and whether there’s a singularity event that happens,” says Khanna. “Those are the kind of codes that I mapped onto the PS3 architecture.”

He explains that because his original code was optimized for the Cell, the move to GPUs was straightforward. “The breakup [in the code] was similar,” he says. “You’ve got the part that needs the I/O done, the CPU, and you’ve got the parallel part that’s going to be done on the GPU, or the Synergistic Processing Elements (SPEs) in the case of the Cell.”
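To make that division of labor concrete, here is a minimal OpenCL host program in C; the kernel is a generic one-dimensional finite-difference update standing in for the parallel part, not the actual UMass Dartmouth code. On the Cell, the host role fell to the main PowerPC core and the kernel role to the SPEs; on a GPU, the same split maps onto CPU and device.

```c
/* Minimal host/device split in OpenCL: the CPU handles setup and I/O,
 * the GPU runs the data-parallel update. Hypothetical stencil kernel;
 * illustrative sketch only (error checking omitted). */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void step(__global const float *u, __global float *v,\n"
    "                   const float c) {\n"
    "    size_t i = get_global_id(0);\n"
    "    size_t n = get_global_size(0);\n"
    "    if (i == 0 || i == n - 1) { v[i] = 0.0f; return; }\n"
    "    /* simple finite-difference update, one point per work-item */\n"
    "    v[i] = u[i] + c * (u[i-1] - 2.0f*u[i] + u[i+1]);\n"
    "}\n";

int main(void) {
    enum { N = 1 << 20 };
    float *u = malloc(N * sizeof(float));
    for (int i = 0; i < N; i++) u[i] = (float)i / N;   /* CPU-side init/I/O */

    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_mem bu = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               N * sizeof(float), u, NULL);
    cl_mem bv = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                               N * sizeof(float), NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "step", NULL);
    float c = 0.25f;
    clSetKernelArg(k, 0, sizeof(cl_mem), &bu);
    clSetKernelArg(k, 1, sizeof(cl_mem), &bv);
    clSetKernelArg(k, 2, sizeof(float), &c);

    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, bv, CL_TRUE, 0, N * sizeof(float), u, 0, NULL, NULL);
    printf("u[1] after one step: %f\n", u[1]);          /* CPU-side output */
    free(u);
    return 0;
}
```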

The codes are well-suited for an FP32-dominant processor, although parts do need FP64, so Khanna employs mixed precision. At a one-sixteenth FP64:FP32 ratio, the AMD R9 Fury X card has only about half a teraflop of spec’d FP64 performance, but Khanna says this is sufficient for his needs.
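The idea, in miniature: do the plentiful arithmetic in FP32 and spend scarce FP64 only where rounding error compounds. A generic C illustration (not the actual astrophysics code):

```c
/* Mixed precision in miniature: bulk arithmetic stays in FP32, but the
 * global accumulation, where rounding error compounds, runs in FP64. */
#include <stdio.h>

int main(void) {
    enum { N = 10000000 };
    float x = 1.0f / 3.0f;        /* cheap, plentiful FP32 work */
    float sum32 = 0.0f;
    double sum64 = 0.0;           /* sparing use of costlier FP64 */
    for (int i = 0; i < N; i++) {
        sum32 += x;
        sum64 += (double)x;       /* promote only at the accumulation */
    }
    printf("FP32 accumulator: %.1f\n", sum32);   /* drifts noticeably */
    printf("FP64 accumulator: %.1f\n", sum64);   /* stays accurate    */
    return 0;
}
```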

“From a cost perspective, I think the AMD cards are a no-brainer – for under $500, you get a few teraflops of performance on this consumer device – while the high-end NVIDIA Tesla products [specifically targeted at HPC] go for a few thousand each,” he says.
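Using the Fury X figures quoted above, the per-dollar arithmetic is roughly:

```latex
% FP32 spec numbers from above; street prices vary
\[
  \frac{8600\ \text{gigaflops}}{\$460} \approx 19\ \text{gigaflops}/\$
\]
```

At that ratio, an accelerator priced at $3,000 (a hypothetical stand-in for “a few thousand”) would need nearly 57 FP32 teraflops just to break even on throughput per dollar.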

Budget realities are what got Khanna started on the path to “misusing” compute for science, as he puts it. “Theoretical physics is an esoteric science without direct implications for high-priority areas like public health, energy and so forth, so it has been an underfunded area,” he says. “We do the most with the resources we have – and that has been a primary driver for pretty much my entire career – to find creative ways to do what we need to do but do it cheaply.”

“I think the main reason why more people haven’t leveraged these AMD Radeon cards for compute is because people are so comfortable with CUDA,” Khanna opines, “but once you have a CUDA version it’s not that difficult to develop an OpenCL version. It’s a bit more complex, but if you have a CUDA version I would say you’re most of the way there already. Then there are also the new tools from AMD that help you switch back and forth.”
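The correspondence between the two is mostly mechanical renaming. Here is a hypothetical saxpy-style kernel in OpenCL C, with its CUDA counterparts noted in comments (illustrative, not Khanna’s code):

```c
/* A hypothetical saxpy-style kernel in OpenCL C; comments show the
 * CUDA equivalents to illustrate how mechanical the port is. */
__kernel void saxpy(__global float *y,   /* CUDA: __global__ void saxpy(float *y, ...) */
                    __global const float *x,
                    const float a,
                    const int n) {
    int i = get_global_id(0);            /* CUDA: blockIdx.x*blockDim.x + threadIdx.x */
    if (i < n)
        y[i] = a * x[i] + y[i];          /* kernel body is identical in both dialects */
}
/* Launch: clEnqueueNDRangeKernel(queue, kernel, 1, ...)  vs.
 * CUDA:   saxpy<<<blocks, threads>>>(y, x, a, n);              */
```

The AMD tools Khanna alludes to are presumably the HIP toolchain from AMD’s Boltzmann Initiative, announced in late 2015, which automates much of this translation.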

While he was experimenting with the Radeon gaming boards, Khanna also wanted to implement a cluster with a mobile platform. He was looking for something sufficiently powerful yet energy-efficient with support for either CUDA or OpenCL. “If you keep those constraints in line, you find there are really two nice viable platforms,” he says. “One is the NVIDIA Tegra, which of course supports CUDA, and the other is the ODROID boards.” The latter, developed by Hardkernel, a South Korean purveyor of open-source hardware, use Samsung processors and an ARM Mali GPU that supports OpenCL.

Khanna ended up going with the Tegra X1 series SoC from NVIDIA, in part because he had several colleagues whose codes were better suited for the CUDA framework. He was also impressed with the stated single-precision peak performance of 512 gigaflops per card.

In May, UMass Dartmouth’s Center for Scientific Computing & Visualization Research (CSCVR) purchased 32 of these cards at roughly a 50 percent discount from NVIDIA. The total performance of the new cluster, dubbed “Elroy,” is a little over 16 teraflops, and it draws only 300 watts of power.

“In terms of power efficiency, the spec’d numbers work out to about 50 gigaflops per watt,” he says, sticking with the FP32 metrics. “If you compare with a traditional cluster, it would be a few gigaflops per watt for a CPU-only architecture. A cluster with GPUs delivers about 10-15 gigaflops per watt. So we’re talking up to five times more performance-per-watt than your typical GPU-accelerated computer, which is what I was hoping for. In mobile devices, people want longer battery life so a lot of innovation is going into performance-per-watt.”
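The spec-sheet arithmetic behind those figures:

```latex
% FP32 spec-sheet arithmetic for the 32-card Tegra X1 cluster
\[
  32 \times 512\ \text{gigaflops} = 16{,}384\ \text{gigaflops}, \qquad
  \frac{16{,}384\ \text{gigaflops}}{300\ \text{W}} \approx 55\ \text{gigaflops/W}
\]
```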

At this time, Khanna has his codes up and running on Elroy and has completed some benchmarking studies, which he and a colleague detail in their new paper, Scientific Computing Using Consumer Video Gaming Hardware Devices. (The paper has yet to be published but a pre-print copy is available here.)

The move to the Tegra X1 platform was an easier transition than the Cell, reports Khanna, because, well, Fortran. “Even though the Cell could run Fortran, and you could use Fortran to run the code on the accelerator cores, there was no bridging possibility,” he explains. “To bridge the communication between the two, you had to go to C. So I actually had to write this fairly funky C-based bridge code just to be able to have the two devices communicate in flight. It was ugly.”

“A student who was working on the GPU port rewrote the entire code in C,” he says. “So for a few years now we’ve had a C/C++ code that works much better than this old Fortran code bridged with C, which was kind of a mess.”

Although he’s very pleased with the performance and energy-efficiency metrics of the Tegra-based “Elroy,” Khanna doesn’t think the mobile device experiment, as he refers to it, was that advantageous from a cost perspective. While he received a nice discount, the boards have a full sticker price of around $600 each, whereas the ODROID boards cost $60 and offer about one-fifth the FP32 performance, potentially a 2X performance-per-dollar savings. Of course, peak floating point performance does not tell the whole story, but Khanna is optimistic about the ODROID prospects.
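At sticker price, and taking one-fifth of the Tegra’s 512 gigaflops for the ODROID, the comparison works out as follows:

```latex
% Sticker prices; ODROID taken at one-fifth of the Tegra's 512 gigaflops
\[
  \text{Tegra X1: } \frac{512\ \text{gigaflops}}{\$600} \approx 0.85\ \text{gigaflops}/\$,
  \qquad
  \text{ODROID: } \frac{512/5\ \text{gigaflops}}{\$60} \approx 1.7\ \text{gigaflops}/\$
\]
```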

“I think if we had done the ODROIDs instead, that would have been more attractive from a cost perspective, and in fact I think we are going to build a cluster with those ODROID Samsung boards as well for comparison’s sake,” he shares.

Khanna maintains that with the right components he can achieve a factor of five or better on performance per watt and performance per dollar over more traditional server silicon. “All we’re doing is misusing these platforms to do constructive science,” he says.

Free Power, Free Space

One reason Khanna has been able to hold on to older architectures rather than having to rip and replace them is the unusual power and datacenter situation at UMass Dartmouth. Situated on the south coast of Massachusetts, UMass Dartmouth benefits from ample wind power and a natural gas co-generation facility. The campus recently became self-sufficient in power but cannot sell power back to the grid, which means it actually generates an excess of power.

“So power actually is free on our campus and we are lucky in that way,” Khanna acknowledges. “Cooling ties into that equation as well since there is sufficient energy for the task.”

As for space, virtualization has shrunk the IT footprint in the datacenter. The university’s student services used to occupy an entire datacenter but now run off a very small virtual cluster. “Our datacenter has slowly transformed itself into a research computing datacenter and that’s where all my hardware goes,” adds Khanna. “As they free up space, I get a chance to access it.”

The Power of Experimentation

A constrained budget wasn’t the only thing motivating Khanna to pursue alternative supercomputing platforms; he was also driven by a strong belief in the benefits of local compute resources. He says that the original PS3 cluster effort took place at a time when TeraGrid (the precursor to XSEDE) was experiencing large fluctuations in supply and demand. “Demand was far outpacing supply at the time,” he reports. “When you did have time, and you submitted a job, there were long wait times. It was getting to the point that jobs took longer in the queue than they took to run and that’s not a productive way to function – you want those times to at least be comparable if not have the queue time be less. I started to think about the best way to build my own cluster locally, cheaply.”

The situation improved when the NSF built Blue Waters and added additional systems to the XSEDE infrastructure. It was the GPU-heavy Keeneland project clusters at Georgia Tech that Khanna got the most mileage from. When those systems were retired in April 2015 after 5.5 years of service, Khanna was motivated to start searching for a cheap GPU or mobile board that would satisfy his need for accelerated local compute.

Says Khanna, “The reliance on something like federal supercomputing sites is not a great way – at least in my experience – to be productive in the long run because it varies so much based on what’s available and what the demand is. It’s good to have local resources, and that’s one of the drivers for doing this here: to have independent resources. That has enabled my colleagues and me to do things we really couldn’t do before.”

Certain models that took months to run on the shared systems, wait times included, can now be completed in an hour or even a few minutes on local resources, according to Khanna. He adds that it’s also very useful for the students to have a local machine because it encourages experimentation and skill development.

“One thing that I find really painful about the shared federal sites is that you get some time – and because your time is so budgeted – there is a disincentive to try different things, to experiment. Plus, typically you don’t get what you ask for; you usually get half or three-fourths of what you wanted. You never want to be in a position in your research where you’re not able to just mess around. I want to see what happens if I just tweak different parameters. If you get to the point where you start thinking ‘is this run worth submitting,’ that’s where I think scientific productivity goes down,” he says.

“Oftentimes you make discoveries in science when you have made a mistake, or you were trying something just for the sake of it, something bizarre, and you learn something from that. Or your code crashes and you make mistakes and you learn something interesting from the forensics. If you are constantly worrying about ‘is this going to cost me, is this worth the submission, am I going to lose too much supercomputing time for this job’ – that, I think, is a detriment to doing good science. That’s where I feel that our local experimental clusters are hugely useful.

“For production level applications and code, when you’ve got something running and you want to run a thousand cases, I think it’s perfectly fine to use supercomputing time at a larger facility, but you don’t have the luxury there to just mess around and that’s where I think a lot of real science happens.”

Khanna takes similar issue with the cloud model, which has the added barrier of a paywall. “While it’s great for production research, I feel that is discouraging for science,” he says.

The physics professor says he’s witnessed a shift toward more of a business model of accountability, with shared resources and the cloud increasingly viewed by funding bodies as a way to manage flat budgets. Khanna is doing what he can to buck this trend. “Of course all campuses are budget-constrained and are directing users toward the cloud with pay-per-use or toward shared federal resources, but as much as I can push back, I will, because the ability to just harmlessly experiment is so important for real science. If you omit that you lose a lot, and I hope that doesn’t happen.”
