Van Andel Research Optimizes HPC Pipeline with DDN

By John Russell

February 7, 2017

For more than a decade the swelling output from life sciences experimental instruments has been overwhelming the research computing infrastructures intended to support them. DNA sequencers were the first – instrument capacities seemed to jump monthly. Today it’s the cryo-electron microscope – some of them 13TB-a-day beasts. Even a well-planned, brand-new HPC environment can find itself underpowered by the time it is switched on.

A good example of the challenge and nimbleness required to cope is Van Andel Research Institute’s (VARI) initiative to build a new HPC environment to support its work on epigenetic, genetic, molecular and cellular origins of cancer – all of which require substantial computational resources. VARI (Grand Rapids, Michigan) is part of Van Andel Institute.

With the HPC building project largely finished, Zack Ramjan, research computing architect for VARI, recalled wryly, “About 10 months ago, we decided we were going to get into the business of cryo-EM. That was news to me and maybe news to many of us here. That suite of three instruments has huge data needs. So we went back and, luckily, the design that we had was rock solid; that’s where we kind of started adding.” He’d been recruited from USC in late 2014 specifically to lead the effort to create an HPC environment for scientific computing.

Titan Krios

The response was to re-examine the storage system, which would absorb the bulk of the new workload strain, and deploy expanded DDN storage – GS7K appliances and WOS – to cope with demand expected from three new cryo-EMs (FEI Titan Krios, FEI Arctica, and a smaller instrument for QC). Taken together, the original HPC building effort and the changes made later on the fly showcase the rapidly changing choices often confronted by “smaller” research institutions mounting HPC overhauls.

Working with DDN, Silicon Mechanics, and Bright Computing, VARI developed a modest-size hybrid cluster-cloud environment with roughly 2,000 cores, 2.2 petabytes of storage, and 40Gb Ethernet throughout. Major components include private-cloud hosting with OpenStack, Big Data analytics, petabyte-scale distributed/parallel storage, and cluster/grid computing. The work required close collaboration with VARI researchers – roughly 32 groups of varying size – to design and support computing workloads in genomics, epigenetics, next-gen sequencing, molecular dynamics, bioinformatics and biostatistics.

As for many similar-sized institutions, bringing order to the storage architecture was a major challenge. Without centralized HPC resources in-house, individual investigators (and groups) tend to go it alone, creating a chaotic, disconnected storage landscape.

“These pools of storage were scattered and independent. They were small, not scalable, and intended for specific use cases,” he recalled. “I wanted to replace all that with a single solution that could support HPC because it’s not just about the storage capacity; we also need to support access to that data in a high performance way, [for] moving data very fast, in parallel, to many machines at once.”

A wide range of instruments – sequencers and cryo-EMs are just two – required access to storage. Workflows were mixed. Data from external collaborators and other consortia were often brought in-house and had a way of “multiplying after being worked on.” Ramjan’s plan was to centralize and simplify. Data would stream directly from instruments to storage. Investigator-created data would likewise be captured in one place.

“There’s no analysis storage and instrument storage, it’s all one storage. The data goes straight to a DDN device. My design was to remove copies and duplications. It comes in one time and users are working on it. It’s a tiered approach. So data goes straight into the highest-performing tier; from there, there is no more movement.” DDN GS7K devices comprise this higher-performing tier.

As the data ‘cools’ and investigators move to new projects, “We may have to retain the data due to obligations or the user wants to keep it around; then we don’t want to keep ‘cold’ data on our highest performing device. Behind the scenes this data is automatically moved to a slower and more economical tier,” said Ramjan. This is the WOS controlled tier. It’s also where much of the cryo-EM data ends up after initial processing.

DDN GRIDScaler-GS7K

Physically there are actually four places the data can be, although the user only sees one, emphasized Ramjan. “It’s either on our mirrored pool – we have two GS7Ks, one on either side of the building for disaster recovery in terms of a flood or tornado, something like that. If the data doesn’t need to have that level of protection it will be on one of the GS7Ks, or it will be replicated on WOS. There are two WOS devices also spread out in the same way, so the data could be sitting mirrored, replicated, on either side. The lowest level of protection would be a single WOS device.”

“Primary data being – data we’re making here, it came off a machine, or there’s no recreating it because the sample is destroyed – we consider that worthy of full replication, sitting in two places on the two GS7Ks. If the user lets it cool down, it will go to the two WOS devices, and inside those devices is also a RAID, so you can say the replication factor is 2-plus. We maintain that for our instrument data.”

Data movement is largely controlled by policy capabilities in the file system. Automating data flow from instruments in this way greatly reduces manual steps and admin requirements. Choosing an effective parallel file system is a key component of such a scheme and reduces the need for additional tools.
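For concreteness, here is a minimal sketch of the placement and replication rules Ramjan describes, assuming an illustrative age threshold and a simple primary-data flag; the production mechanism is the file system’s own rule-based tiering, not application code like this.

```python
# Conceptual model of the tiering/protection scheme described above.
# The 180-day "cooling" window and the primary-data test are illustrative
# assumptions, not VARI's actual Spectrum Scale (GPFS) policy rules.
from dataclasses import dataclass
from datetime import datetime, timedelta

COLD_AFTER = timedelta(days=180)  # assumed cooling window


@dataclass
class DataSet:
    name: str
    last_accessed: datetime
    primary: bool  # irreplaceable instrument output (sample destroyed)


def place(ds: DataSet, now: datetime) -> dict:
    """Hot data lands on the GS7K tier; cold data migrates to WOS.
    Primary data is kept in two copies (mirrored GS7Ks or replicated WOS)."""
    tier = "GS7K" if now - ds.last_accessed < COLD_AFTER else "WOS"
    return {"tier": tier, "copies": 2 if ds.primary else 1}


if __name__ == "__main__":
    run = DataSet("krios_run_042", datetime(2016, 3, 1), primary=True)
    print(place(run, datetime(2017, 2, 7)))  # {'tier': 'WOS', 'copies': 2}
```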

“There are really only three options for a very high performance file system,” said Ramjan, “GPFS (now Spectrum Scale from IBM), Lustre, and OneFS (Dell EMC/Isilon).” OneFS, said Ramjan, which VARI had earlier experience with, was cost-prohibitive compared to the other choices. He also thinks Lustre is more difficult to work with than GPFS and lacks key features.

“We had Isilon before. I won’t say anything bad about it, but pricewise it was pretty painful. I spent a lot of time exploring both of the others. Lustre is by no means a bad option, but for us the right fit was GPFS. I needed something that was more appliance-based. You know we’re not the size of the University of Michigan or USC or a massive institute with 100 guys in the IT department ready to work on this. We wanted to bring something in quick that would be well supported.

“I felt Lustre would require more labor and time than I was willing to spend, and it didn’t have some of the things GPFS does, like tiering and rule-based tiering and easier expansion. DDN could equally have sold us a Lustre GS7K too if we wanted,” he said.

Zack Ramjan-VARI

On balance, “Deploying DDN’s end-to-end storage solution has allowed us to elevate the standard of protection, increase compliance and push boundaries on a single, highly scalable storage platform,” said Ramjan. “We’ve also saved hundreds of thousands of dollars by centralizing the storage of our data-intensive research and a dozen data-hungry scientific instruments on DDN.”

Interesting side note: “The funny thing was the vendors of the microscopes didn’t know anything about IT, so they couldn’t actually tell us concretely what we’d need. For example, would a 10Gig network be sufficient? They couldn’t answer those questions and they still can’t, unfortunately. It put me in quite a bind. I ended up talking with George Vacek at DDN and he pointed me towards three other cryo-EM users also using DDN, which turned out to be a great source of support.”

Storage, of course, is only part of the HPC puzzle. Ramjan was replacing a system that had more in common with traditional corporate enterprise systems than with scientific computing platforms. Starting from scratch, he had a fair degree of freedom in selecting the architecture and choosing components. He says going with a hybrid cluster/cloud architecture was the correct choice.

Silicon Mechanics handled the heavy lifting with regard to hardware and integration. The Bright Computing provisioning and management platform was used. There are also heterogeneous computing elements although accelerators were not an early priority.

“The genomics stuff – sequencing, genotyping, etc. – that we’ve been doing doesn’t benefit much from GPUs, but the imaging analysis we are getting into does. So we do have a mix of nodes, some with accelerators, although they are all very similar at the main processor. The nodes all have Intel Xeons with a lot of memory, fast SSD, and fast network connections. We have some [NVIDIA] K80s and are bringing in some of the new GTX 1080s. I’m pretty excited about the 1080s because they are a quarter of the cost and in our use case seem to be performing just as well if not a little bit better,” said Ramjan.

“I had the option of using InfiniBand, but said listen, we know Ethernet, we can do Ethernet in a high performance way, let’s just stick with it at this time. Now there’s up to 100 Gig Ethernet.”

In going with the hybrid HPC cluster/cloud route, Ramjan evaluated public cloud options. “I wanted to be sure it made sense to do it in-house (OpenStack) when I could just put it in Google’s cloud or Amazon or Microsoft. We ran the numbers and I think cloud computing is great for someone doing a little bit of computing a few times a year, but not for us.” It’s not the cost of cycles; they are cheap enough. It’s the data movement and storage charges.

Cloud bursting to the public cloud is an open question for Ramjan. He is already working with Bright Computing on a system environment update, expected to go live in March, that will have cloud bursting capability. He wonders how much it will be used.

“It’s good for rare cases. Still you have to balance that against just acquiring more nodes. The data movement in and out of the cloud is where they get you on price. With a small batch I could see it being economical but I have an instrument here that can produce 13 TB a day – moving that is going to be very expensive. We have people doing molecular dynamics, low data volume, low storage volume, but high CPU requirements. But even then latency is a factor.”
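As a rough illustration of the arithmetic behind that point, the sketch below estimates the cost of pulling one such instrument’s daily output back out of a public cloud, assuming a placeholder egress rate of $0.09 per GB; the rate is an assumption roughly in line with published U.S. cloud egress pricing of the period, not a figure from the article.

```python
# Back-of-envelope cost of moving one cryo-EM instrument's 13 TB/day output
# back out of a public cloud. The egress rate below is an assumed placeholder.
TB_PER_DAY = 13
GB_PER_TB = 1000
EGRESS_USD_PER_GB = 0.09  # assumption, not a quoted price

daily_cost = TB_PER_DAY * GB_PER_TB * EGRESS_USD_PER_GB
print(f"~${daily_cost:,.0f}/day, ~${daily_cost * 30:,.0f}/month in egress alone")
# ~$1,170/day, ~$35,100/month -- before any storage or compute charges
```

Even with volume discounts, data movement rather than compute quickly becomes the dominant line item at instrument scale, which is the trade-off Ramjan is pointing to.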

System adoption has been faster than expected. “I thought utilization would ramp up slowly, but [already] we’re sitting at 80 percent utilization on a constant basis often at 100 percent. It surprised me how fast and how hungry our investigators were for these resources. If you would have asked them beforehand ‘do you need this’ they probably would have said no.”
