Cray Offers Supercomputing as a Service, Targets Biotechs First

By John Russell

May 16, 2017

Leading supercomputer vendor Cray and datacenter/cloud provider the Markley Group today announced plans to jointly deliver supercomputing as a service. The initial offering provides access to Cray’s Urika-GX platform, housed in Markley’s massive Boston datacenter, and is aimed at the many biotechs in the region. The partners say the service is unique, and they plan to address other verticals with a range of Cray products over time.

“We want to take a targeted approach and are going to be really thoughtful about what vertical is next or what type of infrastructure best solves the use case represented by that vertical and where the need or demand is,” said Fred Kohout, senior vice president of products and chief marketing officer, Cray. “We want to be customer-led here.” Certainly the Boston-Cambridge area is a mecca for large and small life sciences organizations in both industry and academia.

Cray Urika-GX

Supercomputing as a service, say Cray and Markley, will make supercomputing available to many users who are unable to afford or support such resources themselves or who only need them sporadically. They also argue supercomputing delivers a significant performance advantage over traditional HPC clusters, in this case on the order of 5X for the genomics workloads evaluated so far.

“This is supercomputing. It sounds like a marketing term but it’s really different. We are not talking about 1000 Dell blades all in the same datacenter. We are talking about the Cray Aries interconnect and optimizations such as the Cray Graph Engine (CGE) that are qualitatively different than just having lots of CPUs close to each other,” said Patrick Gilmore, chief technology officer, Markley.

No doubt there will be kinks to iron out, but supercomputing as a service is an interesting paradigm shift and a potential market expander for Cray. The partners declined to say much about pricing, other than that they were benchmarking against prices for HPC-in-the-cloud resources and that there would be a premium over those; how much wasn’t revealed. It will be more than 10 percent higher, but the partners say it won’t be a “discouraging” premium.

Cray bills the Urika-GX as the first agile analytics platform to fuse supercomputing capabilities with open enterprise standards. Its Cray Graph Engine provides optimized pattern matching and is tuned to leverage the scalable parallelism and performance of the Urika-GX. These strengths are particularly valuable for many bioinformatics tasks.
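Cray documents CGE as an RDF graph engine queried via SPARQL. As a rough illustration of the kind of pattern matching involved, the sketch below issues a graph-shaped query from Python using the SPARQLWrapper library; the endpoint URL, port, and the protein-interaction schema are hypothetical illustrations, not taken from Urika-GX documentation.

```python
# Hypothetical sketch: a pattern-matching query against a SPARQL endpoint
# like the one the Cray Graph Engine exposes. The endpoint URL/port and the
# biology schema below are illustrative assumptions, not Cray's actual API.
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

endpoint = SPARQLWrapper("http://urika-gx.example.com:3756/dataset/query")  # hypothetical
endpoint.setReturnFormat(JSON)

# Find genes linked to a disease through a two-hop interaction pattern --
# the graph-shaped query that engines like CGE are tuned to match at scale.
endpoint.setQuery("""
    PREFIX ex: <http://example.org/bio#>
    SELECT ?gene ?protein WHERE {
        ?gene    ex:encodes        ?protein .
        ?protein ex:interactsWith  ?target .
        ?target  ex:associatedWith ex:Melanoma .
    }
    LIMIT 25
""")

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["gene"]["value"], row["protein"]["value"])
```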

“Research and development, particularly within life sciences, biotech and pharmaceutical companies, is increasingly data driven. Advances in genome sequencing technology mean that the sheer volume of data and analysis continues to strain legacy infrastructures,” said Chris Dwan, who led research computing at both the Broad Institute and the New York Genome Center. “The shortest path to breakthroughs in medicine is to put the very best technologies in the hands of the researchers, on their own schedule. Combining the strengths of Cray and Markley into supercomputing as a service does exactly that.”

As explained by Jeff Flanagan, executive vice president, Markley Group, “the service will not be offered as a ‘partition service’ but as a reservation service. Companies and institutions will have the opportunity to reserve time on the various Cray machines, and we are starting with the Urika-GX.” One attractive aspect of starting with the Urika-GX is that it looks a lot like a standard Linux box and comes with a fair amount of pre-installed software (Hadoop is one example), according to Ted Slater, global head of healthcare and life sciences at Cray.

Markley and Cray have tried to remove much of the heavy lifting required to run finicky supercomputers; still, using the service isn’t trivial. As part of the pre-staging, users need to move their data into the Markley datacenter and make sure their software will actually run when their scheduled time on the Urika-GX arrives. This can take some time (e.g., a month). That said, Markley’s datacenter has a variety of high-performance resources (InfiniBand, 100 Gig Ethernet, petabytes of object storage, fast SSDs, etc.).
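Back-of-the-envelope arithmetic shows why pre-staging can stretch to weeks: dataset size and link speed dominate. The dataset sizes, link rates, and efficiency factor below are illustrative assumptions, not figures from Markley.

```python
# Back-of-the-envelope estimate of data pre-staging time. All numbers are
# illustrative assumptions; Markley did not publish sizes or link rates.

def transfer_days(dataset_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Days to move `dataset_tb` terabytes over a `link_gbps` link that
    sustains `efficiency` of its nominal rate."""
    bits = dataset_tb * 8e12                        # TB -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 86400

# A 500 TB genomics archive over a shared 1 Gbps VPN vs. faster direct fiber:
for gbps in (1, 10, 100):
    print(f"500 TB over {gbps:>3} Gbps: {transfer_days(500, gbps):6.1f} days")
# 1 Gbps lands around two months -- consistent with month-scale pre-staging.
```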

Uploading data shouldn’t be a problem for most clients, says Gilmore: “If you are in New England, the Markley datacenter is pretty much the center of the Internet – something like 70 percent of the internet fiber goes through that building, so there are lots of ways to get into the building. We’ll get you a connection, either a VPN or direct fiber. Most of the genomics customers we’re talking to are already colocation customers [here], so there is probably direct fiber from their offices, their sequencers, to the building.”

Data can live on a storage array users already have in the colocation facility or be placed on a Markley array. Cray and Markley have also set up a virtualized version of the Urika-GX to serve as a test platform for scripts – “to make sure they don’t waste time on a very, very expensive, very fast supercomputer, when the reservation comes up.” Markley will then preload the data or make the connection to the user’s array, depending on their preference.

“We are actually going to put the virtual machine behind a load balancer so that when you log onto it and test it, when it’s your turn to come up, you’ll just shut down the virtual machine. We will migrate the virtual machine onto the Cray for you. We’ll reconfigure the load balancing so you don’t actually have to do anything different. It works just like it did on the virtual machine. We’ll also be transferring all the data for you,” said Gilmore. “So there is a little bit of preorganization to do, but the idea is to make this as easy and seamless as possible. Users log on again, make sure the program works properly, then Monday morning (at the prearranged reserved time) they can turn on the jobs and away they go.”
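What Gilmore describes amounts to indirection through a stable endpoint: the user always connects to the same address, and Markley repoints the load balancer from the test VM to the Cray when the reservation begins. A minimal sketch of the user-side view, assuming SSH access via the standard paramiko library; the hostname, username, and job command are hypothetical, and nothing here is Markley’s actual tooling.

```python
# User-side view of the load-balanced endpoint Gilmore describes: the same
# hostname fronts the test VM before the reservation and the Urika-GX after.
# Hostname, credentials, and job command are hypothetical illustrations.
import paramiko  # pip install paramiko

ENDPOINT = "urika.markley.example.com"  # stable address behind the load balancer

def run_job(command: str) -> str:
    """Run `command` on whatever backend the load balancer currently fronts."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(ENDPOINT, username="researcher")  # key-based auth assumed
    _, stdout, _ = client.exec_command(command)
    output = stdout.read().decode()
    client.close()
    return output

# Dry run against the virtual machine during the week...
print(run_job("spark-submit --version"))
# ...and the identical call Monday morning, now landing on the Cray.
```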

Cray and Markley say little about the actual projects used for beta testing; however, the Cray website carries a brief account of a genomics project conducted by the Broad Institute using the Urika-GX, and it no doubt has lessons. The Broad project used Hail, an open source framework for exploring and analyzing genetic data at massive scale.

Hail is built on top of Apache Spark and can analyze terabyte-scale genetic data. “Still under active development, Hail is used in medical and population genomics at the Broad for a variety of diseases. It also serves as the core analysis platform for the Genome Aggregation Database (gnomAD) – the largest publicly available collection of human DNA sequencing data, and a critical resource for the interpretation of disease-causing genetic changes,” according to the Cray document.
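Hail’s front end is a Python library running on Spark. A minimal sketch of the kind of analysis it supports, written against Hail’s public 0.2-style API (the VCF path is a placeholder, and this is an illustration, not the Broad’s actual pipeline):

```python
# Minimal Hail sketch: import a VCF and run basic QC on Spark-backed
# distributed matrix tables. The file path is a placeholder; this
# illustrates Hail's public Python API, not the Broad's pipeline.
import hail as hl

hl.init()  # starts (or attaches to) a Spark context

mt = hl.import_vcf("data/cohort.vcf.bgz")  # placeholder path
mt = hl.variant_qc(mt)                     # per-variant metrics
mt = hl.sample_qc(mt)                      # per-sample metrics

# Keep common variants that pass a simple call-rate filter.
mt = mt.filter_rows(
    (mt.variant_qc.AF[1] > 0.01) & (mt.variant_qc.call_rate > 0.95)
)
print(mt.count())  # (variants, samples) remaining
```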

Hail is also the tool pre-installed on the Cray-Markley Urika-GX offering, although Slater says users can choose to port their tool of choice, such as GATK (also a Broad project). Slater said Cray has also been working with a genome assembler on its XC platform and is working to port it to the Urika-GX for evaluation.
