Alternative Supercomputing or How to Misuse a Computer

By Tiffany Trader

July 14, 2016

In 2008, the IBM Roadrunner supercomputer broke the petaflops barrier using the power of the heterogeneous Sony Cell Broadband Engine (BE) processor. A year prior, the Cell BE had already made its way into the consumer market as the engine inside the Sony PlayStation 3. The PS3’s accelerated design, Linux capability and low price point inspired several organizations, including the United States Air Force Research Laboratory (AFRL), to build Linux clusters out of the gaming machines. While the 2010 AFRL “Condor” deployment was the largest of these efforts, stringing together 1,760 Sony PlayStation 3 boxes for an estimated 500 teraflops of performance, the PS3 cluster effort begun at the University of Massachusetts Dartmouth in 2007 is likely the longest-running.

Gaurav Khanna, a professor in the physics department at UMass Dartmouth, built his first Cell-based cluster with eight PS3s in 2007. The “PS3 Gravity Grid” was used to perform research-grade simulations of black hole systems and was the first such cluster to generate published scientific results. With support from Sony, the cluster was later expanded to 16 PS3s. In 2014, the UMass Dartmouth team, under Khanna’s direction, built yet another cluster, this one housing 308 Sony PS3 consoles in a refrigerated shipping container, using hardware donated from the AFRL effort. Khanna reports that these PS3s are still in use and are delivering performance and performance-per-watt on par with unaccelerated Xeon boxes. UMass Dartmouth is a net energy producer, which makes running this older silicon more feasible than it otherwise might be. More on this to follow.

[Photo: Gaurav Khanna]

After Sony locked down the PS3 OS in 2010 in response to a hacking incident, much of the enthusiasm for the gaming-based clusters fell by the wayside, but Khanna still champions using low-cost consumer (and now mobile) hardware for scientific computing.

Recently the professor began looking into other technologies in the same spirit as the PlayStation 3, which, aside from providing efficient number-crunching, had the economics of a mass-manufactured consumer device sold at a heavy discount. On account of its competition with Microsoft, Sony was selling PS3s for about half what it cost to make them.

“I’m interested in a cheap device that’s mass manufactured that’s reliable and is high-performance,” says Khanna, “And I think that naturally brings you to two things: video gaming cards like NVIDIA GeForce or the AMD Radeons and mobile chips such as ARM.”

Khanna surveyed the single-board computer (SBC) space, including the Raspberry Pi, in search of a suitable platform. “While it’s true that the Pis sip power,” he said, “being in the few hundred megaflops range each, you would have to have so many of them with so many power supplies, cables, network cards and switches to get some substantial performance that it’s just not worth the hassle.”

A couple of years ago he began experimenting with AMD Radeon cards to crunch some of his astrophysics codes. With the assistance of students, he had already created OpenCL and CUDA versions of his codes, and the Radeon, of course, supports OpenCL. He reports being impressed by the performance and reliability of the cards: of the roughly two dozen that have been crunching scientific work basically non-stop for the last two years, only two or three have failed, he says. The latest cards he’s acquired are Radeon R9 Fury X boards, which provide 8.6 teraflops of single-precision floating point performance and 512 GB/s of memory bandwidth for about $460 apiece.

Khanna, a theoretical physicist turned computational scientist who studies the internal workings of black holes and uses theory and computer simulations to predict gravitational wave radiation, says it’s this research that has benefited most from his use of consumer-class silicon.

“I’m very interested in what happens inside a black hole. There has been a fair amount of work on that over the decades, but a lot of questions still remain on what actually happens if you’re inside a black hole and what kind of effects you could expect to observe and expect to feel and whether there’s a singularity event that happens,” says Khanna. “Those are the kind of codes that I mapped onto the PS3 architecture.”

He explains that because his original code was optimized for the Cell, the move to GPUs was straightforward. “The breakup [in the code] was similar,” he says. “You’ve got the part that needs the I/O done, the CPU, and you’ve got the parallel part that’s going to be done on the GPU, or the Synergistic Processing Elements (SPEs) in the case of the Cell.”
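
In OpenCL terms, that division of labor looks roughly like the sketch below. This is an illustrative example, not Khanna’s actual code: the “evolve” kernel is a stand-in for one time step of a simulation, and error checking is omitted for brevity.

```c
/* Minimal host/device split: the CPU handles setup, the serial driver
   loop and I/O; the GPU runs the data-parallel update each time step.
   Build with: cc sketch.c -lOpenCL */
#define CL_TARGET_OPENCL_VERSION 120
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void evolve(__global float *f, const float dt) {\n"
    "    size_t i = get_global_id(0);\n"
    "    f[i] += dt * f[i];   /* stand-in for the real update rule */\n"
    "}\n";

int main(void) {
    enum { N = 1 << 20, STEPS = 1000 };
    float *field = malloc(N * sizeof(float));
    for (int i = 0; i < N; i++) field[i] = 1.0f;      /* CPU: initial data */

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                N * sizeof(float), field, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "evolve", NULL);

    float dt = 1e-3f;
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(k, 1, sizeof(float), &dt);

    size_t global = N;
    for (int step = 0; step < STEPS; step++)          /* CPU: serial driver */
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);

    /* CPU: pull the field back for I/O, e.g. writing a checkpoint */
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, N * sizeof(float), field,
                        0, NULL, NULL);
    printf("f[0] after %d steps: %g\n", STEPS, field[0]);
    free(field);
    return 0;
}
```

The same skeleton maps onto the Cell: the driver loop and the I/O live on the host core, while the enqueued kernel plays the role of the work farmed out to the SPEs.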

The codes are well-suited to an FP32-dominant processor, although parts do need FP64, so Khanna employs mixed precision. At its one-sixteenth FP64-to-FP32 ratio, the AMD R9 Fury X card has only about half a teraflop of spec’d FP64 performance, but Khanna says this is sufficient for his needs.
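
Here is a hedged sketch of what that kind of mixed-precision split can look like in OpenCL C (again illustrative, not Khanna’s kernel): the bulk field update runs in FP32, while a small accuracy-sensitive quantity, such as a global energy sum, is accumulated in FP64.

```c
/* Illustrative mixed-precision kernel: FP32 for the bulk update, FP64
   only for a reduction that is sensitive to rounding error. Assumes the
   device exposes cl_khr_fp64 and a power-of-two work-group size. */
#pragma OPENCL EXTENSION cl_khr_fp64 : enable

__kernel void step_and_sum(__global float *f,
                           __global double *partial,  /* one slot per group */
                           __local double *scratch,
                           const float dt)
{
    size_t gid = get_global_id(0);
    size_t lid = get_local_id(0);

    /* Bulk arithmetic in FP32 -- where the Fury X's 8.6 teraflops live. */
    f[gid] += dt * f[gid];

    /* Accuracy-sensitive accumulation in FP64 (work-group tree reduction;
       the host sums the per-group partials afterward). */
    scratch[lid] = (double)f[gid];
    barrier(CLK_LOCAL_MEM_FENCE);
    for (size_t s = get_local_size(0) / 2; s > 0; s >>= 1) {
        if (lid < s) scratch[lid] += scratch[lid + s];
        barrier(CLK_LOCAL_MEM_FENCE);
    }
    if (lid == 0) partial[get_group_id(0)] = scratch[0];
}
```

Because only the reduction touches FP64, the card’s modest half-teraflop double-precision rate is rarely the bottleneck.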

“From a cost perspective, I think the AMD cards are a no-brainer – for under $500, you get a few teraflops of performance on this consumer device – while the high-end NVIDIA Tesla products [specifically targeted at HPC] go for a few thousand each,” he says.

Budget realities are what got Khanna started on the path to “misusing” compute for science, as he puts it. “Theoretical physics is an esoteric science without direct implication on high-priority areas like public health, energy and so forth, so it has been an underfunded area,” he says. “We do the most with the resources we have – and that has been a primary driver for pretty much my entire career – to find creative ways to do what we need to do but do it cheaply.”

“I think the main reason why more people haven’t leveraged these AMD Radeon cards for compute is because people are so comfortable with CUDA,” Khanna opines, “but once you have a CUDA version it’s not that difficult to develop an OpenCL version. It’s a bit more complex, but if you have a CUDA version I would say you’re most of the way there already. Then there are also the new tools from AMD that help you switch back and forth.”
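
How close are the two dialects in practice? Here is a textbook SAXPY kernel written both ways (a generic illustration, not one of Khanna’s kernels); mostly the thread-indexing idiom changes, while the host-side boilerplate accounts for the extra complexity he mentions. The AMD tools he refers to are likely the HIP conversion utilities AMD had recently introduced, though the article does not name them.

```c
/* CUDA version (compiled offline with nvcc from a .cu file): */
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   /* per-thread index */
    if (i < n) y[i] = a * x[i] + y[i];
}
/* launch: saxpy<<<(n + 255) / 256, 256>>>(n, a, d_x, d_y); */

/* OpenCL version (compiled at runtime with clBuildProgram): */
__kernel void saxpy(int n, float a,
                    __global const float *x, __global float *y)
{
    int i = get_global_id(0);                        /* per-work-item index */
    if (i < n) y[i] = a * x[i] + y[i];
}
/* launch: clEnqueueNDRangeKernel with the global size rounded up to a
   multiple of the work-group size. */
```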

[Photo: UMass Dartmouth’s “Elroy” cluster]

While he was experimenting with the Radeon gaming boards, Khanna also wanted to implement a cluster with a mobile platform. He was looking for something sufficiently powerful yet energy-efficient with support for either CUDA or OpenCL. “If you keep those constraints in line, you find there are really two nice viable platforms,” he says. “One is the NVIDIA Tegra, which of course supports CUDA, and the other is the ODROID boards.” Developed by Hardkernel, a South Korean purveyor of open-source hardware, the ODROID boards use Samsung processors and an ARM Mali GPU that supports OpenCL.

Khanna ended up going with NVIDIA’s Tegra X1 SoC, in part because several colleagues had codes better suited to the CUDA framework. He was also impressed with its stated peak single-precision performance of 512 gigaflops per card.

In May, UMass Dartmouth’s Center for Scientific Computing & Visualization Research (CSCVR) purchased 32 of these cards at roughly a 50 percent discount from NVIDIA. The total performance of the new cluster, dubbed “Elroy,” is a little over 16 teraflops, and it draws only 300 watts of power.

“In terms of power efficiency, the spec’d numbers work out to about 50 gigaflops per watt,” he says, sticking with the FP32 metrics. (The arithmetic: 32 boards at 512 gigaflops each is roughly 16.4 teraflops, which divided by 300 watts comes to about 55 gigaflops per watt.) “If you compare with a traditional cluster, it would be a few gigaflops per watt for a CPU-only architecture. A cluster with GPUs delivers about 10-15 gigaflops per watt. So we’re talking up to five times more performance-per-watt than your typical GPU-accelerated computer, which is what I was hoping for. In mobile devices, people want longer battery life, so a lot of innovation is going into performance-per-watt.”

At this time, Khanna has his codes up and running on Elroy and has completed some benchmarking studies, which he and a colleague detail in their new paper, “Scientific Computing Using Consumer Video Gaming Hardware Devices.” (The paper has yet to be published, but a pre-print copy is available here.)

The move to the Tegra X1 platform was an easier transition than the Cell, reports Khanna, because, well, Fortran. “Even though the Cell could run Fortran, and you could use Fortran to run the code on the accelerator cores, there was no bridging possibility,” he explains. “To handle the communication between the two, you had to go to C. So I actually had to write this fairly funky C-based bridge code just to be able to have the two devices communicate in flight. It was ugly.”

“A student who was working on the GPU port rewrote the entire code in C,” he says. “So for a few years now we’ve had a C/C++ code that works much better than this old Fortran code bridged with C, which was kind of a mess.”

Although he’s very pleased with the performance and energy-efficiency metrics of the Tegra-based “Elroy,” Khanna doesn’t think the mobile device experiment, as he refers to it, was that advantageous from a cost perspective. Though he received a nice discount, the boards carry a full sticker price of around $600 each, while the ODROID boards are $60 and offer about one-fifth the FP32 performance, so potentially a 2X performance-per-dollar savings. Of course, peak floating point performance does not tell the whole story, but Khanna is optimistic about the ODROID prospects.

“I think if we had done the ODROIDs instead, that would have been more attractive from a cost perspective, and in fact I think we are going to build a cluster with those ODROID Samsung boards as well for comparison’s sake,” he shares.

Khanna maintains that with the right components he can achieve a factor of five or better on performance per watt and performance per dollar over more traditional server silicon. “All we’re doing is misusing these platforms to do constructive science,” he says.

Free Power, Free Space

One reason Khanna has been able to hold on to older architectures rather than having to rip and replace is the rather unusual power and datacenter situation at UMass Dartmouth. Located on the south coast of Massachusetts, UMass Dartmouth benefits from ample wind power and a natural gas co-generation facility. The campus recently became energy self-sufficient but cannot sell power back to the grid, which means it actually generates an excess of power.

“So power actually is free on our campus and we are lucky in that way,” Khanna acknowledges. “Cooling ties into that equation as well since there is sufficient energy for the task.”

As for space, virtualization has freed up the IT footprint in the datacenter. The university’s student services used to occupy an entire datacenter but now run off a very small virtual cluster. “Our datacenter has slowly transformed itself into a research computing datacenter and that’s where all my hardware goes,” adds Khanna. “As they free up space, I get a chance to access it.”

The Power of Experimentation

A constrained budget wasn’t the only thing motivating Khanna to pursue alternative supercomputing platforms; he was also driven by a strong belief in the benefits of local compute resources. He says that the original PS3 cluster effort took place at a time when TeraGrid (the precursor to XSEDE) was experiencing large fluctuations in supply and demand. “Demand was far outpacing supply at the time,” he reports. “When you did have time, and you submitted a job, there were long wait times. It was getting to the point that jobs took longer in the queue than they took to run, and that’s not a productive way to function – you want those times to at least be comparable, if not have the queue time be less. I started to think about the best way to build my own cluster locally, cheaply.”

The situation improved when the NSF built Blue Waters and added additional systems to the XSEDE infrastructure. It was the GPU-heavy Keeneland project clusters at Georgia Tech that Khanna got the most mileage from. When those systems were retired in April 2015 after 5.5 years of service, Khanna was motivated to start searching for an inexpensive GPU or mobile board that would satisfy his need for cheap, accelerated local compute.

Says Khanna, “The reliance on something like federal supercomputing sites is not a great way – at least in my experience – to be productive in the long run because it varies so much based on what’s available and what the demand is. It’s good to have local resources, and that’s one of the drivers for doing this here: to have independent resources. That has enabled my colleagues and me to do things we really couldn’t do before.”

Certain models that took months to run on the shared systems, wait times included, can now be done in an hour or even a few minutes on local resources, according to Khanna. He adds that a local machine is also very useful for the students because it encourages experimentation and skill development.

“One thing that I find really painful about the shared federal sites is you get some time – and because your time is so budgeted – there is a disincentive to try different things, to experiment. Plus, typically you don’t get what you ask for; you usually get half or three-fourths of what you wanted. You never want to be in a position in your research to not be able to just mess around. I want to see what happens if I just tweak different parameters. If you get to the point where you start thinking ‘is this run worth submitting,’ that’s where I think scientific productivity goes down,” he says.

“Oftentimes you make discoveries in science when you have made a mistake, or you were trying something just for the sake of it, something bizarre, and you learn something from that. Or your code crashes and you make mistakes and you learn something interesting from the forensics. If you are constantly worrying about ‘is this going to cost me, is this worth the submission, am I going to lose too much supercomputing time for this job’ – that, I think, is a detriment to doing good science. That’s where I feel that our local experimental clusters are hugely useful.

“For production level applications and code, when you’ve got something running and you want to run a thousand cases, I think it’s perfectly fine to use supercomputing time at a larger facility, but you don’t have the luxury there to just mess around and that’s where I think a lot of real science happens.”

Khanna takes similar issue with the cloud model, which has the added barrier of a pay wall. “While it’s great for production research, I feel that is discouraging for science,” he says.

The physics professor says he’s witnessed a shift toward more of a business model of accountability, with shared resources and the cloud increasingly viewed by funding bodies as a way to manage flat budgets. Khanna is doing what he can to buck this trend. “Of course all campuses are budget-constrained and are directing users toward the cloud with pay-per-use or toward shared federal resources, but as much as I can push back, I will, because the ability to just harmlessly experiment is so important for real science; if you omit that you lose a lot, and I hope that doesn’t happen.”
