Servers Headed to Junkyard Find 2nd Life Fighting Cancer in Clusters

By Tiffany Trader

March 20, 2020

Ottawa-based charitable organization Cancer Computer is on a mission to stamp out cancer and other life-threatening diseases, including coronavirus, by putting to good use discarded high-end computer hardware that would otherwise end up in landfills. Their tagline is “a cure could be waiting in line,” and their objective is to point as many cycles as possible at eliminating computational wait times that impede medical advances.

Since its founding five years ago, Cancer Computer has amassed 14,300 cores, in line with the computing capacity of a mid-sized university, for researchers across the United States and Canada. In a typical scenario, Cancer Computer receives hardware from a corporate partner that is upgrading its own infrastructure and supports the mission of advancing omics-based research.

Founder and CTO Roy Chartier – who also started a for-profit company, Canada HPC Corp., earlier this year – said the inspiration for Cancer Computer came to him when he realized there was a dearth of resources for research computing, and that as the need for computing grew, the gap was only getting worse. He chose cancer as a focus because of its lethality and because much of the research lends itself to high-throughput and high-performance computing. It’s a disease with broad impact – one-quarter of people will get a diagnosis in their lifetime. (Chartier himself told us he lost two people close to him to cancer.)

But Cancer Computer doesn’t just focus on cancer; it also supports neuroscience research, contributing spare cycles for protein structure prediction via the Rosetta@home project (for which it is the #1 supporting organization). And now Cancer Computer has joined the global fight against the coronavirus pandemic by directing all available cycles to Rosetta’s COVID-19 project, which assists scientists at the University of Washington’s Institute for Protein Design in Seattle. Like the Folding@home crowdsourced coronavirus research project (see related story), Rosetta@home is modeling SARS-CoV-2 protein interactions with potential drug targets.

When I spoke with Chartier in February, he was excited about a recent large donation, a tranche of 400 servers from the Canadian government. Chartier rattled off a number of sites where Cancer Computer has deployed its donated hardware: the University of Illinois at Urbana-Champaign, Indiana University, the University of Utah, Queen’s University in Kingston, ON, and McGill University, Montreal, to name a few. There are a number of private sites as well. As you’d expect of hardware that has been in production a few years, most of the donated gear is Intel-based, but among the thousand-or-so nodes put into service, there are a couple dozen AMD servers and several GPU racks. Chartier said he’s interested in getting more AMD gear, which he says has demonstrated good results on some of the bio-benchmarks, such as GROMACS.

Typically, Cancer Computer allocates 75 percent of its donated resources to the host institution, with the remaining 25 percent dedicated to the organization’s charitable goals through a number of projects. These include Open Science Grid and XSEDE, as well as BOINC-based distributed research networks such as World Community Grid and Rosetta@home. Cancer Computer also fields specific requests from researchers who do not have other resources available to them.

Cancer Computer’s donated servers are deployed either at a host institution or in a colocation facility operated by Cancer Computer or a partner. As the organization scales, Chartier would like to grow the colo side, with an eye to sites in green-power regions, including Quebec and Ontario, rich in hydroelectric power, and possibly geothermally powered Iceland. Certain workloads, such as those involving clinical data, must comply with HIPAA or PHIPA (Ontario’s counterpart to HIPAA), compliance that can only be guaranteed in a commercial datacenter.

As a charity, funding is a constant challenge. Although the computers are donated and the staff are volunteers, there are still expenses: replacement hard drives, SSDs, RAM, switches and rails, as well as travel expenses for on-site installations. There is currently a concerted effort at Cancer Computer to build up its board and secure corporate sponsorships in order to scale and become more sustainable. A near-term goal is to employ one or two full-time techs and to implement cost recovery measures.

“We find people that we ask [to be involved] and they’re very passionate about it; they’re willing to help where they can, so it’s just a matter of finding the right people, the right institutions, the right projects, and the right donors, you know, the people who want to support you,” said Chartier.

Supercomputing has a history of giving decommissioned systems a new lease on life. This includes the high-profile donation of TACC’s Ranger system to universities in South Africa, as well as the UCSD Gordon system put into service at the Simons Foundation’s Flatiron Institute in New York. But some donations you probably haven’t heard of. For various reasons, the partners may not want to make them public – often because they don’t want to ruffle the feathers of vendors in the business of selling the next generation of gear. But given the many good causes that need processing power and the importance of reducing e-waste, there is growing support in the community – vendors included – for extending the life of systems that would otherwise end up in a landfill.

“The whole thing comes down to open science, right?” said Chartier. “Open science and sharing of data, sharing of research. If we get, let’s say, two or three more sites, and we had a constant inflow of gear, and we had enough money to be able to have technicians clean it, update the firmware and ship it to these locations, we continue to develop this international e-infrastructure, and make it sustainable and much bigger – I mean, no matter how much money you throw at a problem, particularly like cancer, you know, there’s always room for more.”

“We don’t want to compete with the vendors,” he added. “But if there’s usable, secondhand gear that’s being thrown out, my goodness, that’s definitely something that really shouldn’t happen.”

A number of prominent organizations agree. At the University of Illinois at Urbana-Champaign, Cancer Computer’s deployment of 300 servers supports the work of more than 500 researchers per year. Cancer Computer is in the process of installing a high-throughput cluster at McGill comprising some 400 servers. Indiana University is another high-profile site; the partners recently extended a three-year relationship.

For donations, Cancer Computer only accepts gear that’s less than 10 years old. On the compute side, Cancer Computer looks for Ivy Bridge processors or better; for storage it can go back a generation or two. The better equipment gets put into production, and the charity is also building an internal system with plans to assist partner universities with code development and the building of in-house applications.

In most cases, the hardware that Cancer Computer gets is at the end of its support contract. “We can give it a second life. Outside of a DMZ, the servers can run lots of workloads if there’s no personal data, and you don’t typically need the same levels of security that you would require with warrantied gear. Even if 10, 20, 30 percent of your drives are dead, you can set up your storage to be distributed and you can run, no problem. HPC can be engineered and architected in such a way to do that. It’s not a five-nines scenario,” he said.

Canada HPC

Chartier also updated us on Canada HPC, formed when some Cancer Computer volunteers saw a commercial opportunity in workloads outside the cancer research space. (Cancer Computer’s own mission is to provide compute resources to researchers free where possible, or heavily discounted on a cost-recovery basis.)

Earlier this month, Canada HPC signed a partnership agreement with Dell. As a solutions provider for Dell, Canada HPC will do everything from the rack-and-stack, configuration, deploying the scheduling tools, up to and including support. Chartier explained that Dell saw a need and reached out to them – and Canada HPC had the necessary Canadian federal government security clearances.
