SDSC’s New Storage Cloud: ‘Flickr for Scientific Data’

By Michael Feldman

October 6, 2011

Last month, the San Diego Supercomputer Center launched what it believes is “the largest academic-based cloud storage system in the U.S.” The infrastructure is designed to serve the country’s research community and will be available to scientists and engineers from essentially any government agency that needs to archive and share super-sized data sets.

Certainly the need for such a service exists. The modern practice of science is a community activity, and the way researchers collaborate is by sharing their data. Before the emergence of the cloud, the main way to accomplish that was by emailing files and sending manuscripts back and forth over the internet. But with the coalescence of some old and new technologies, there are now economically viable ways to share really large amounts of data with colleagues.

In the press release describing the storage cloud, SDSC director Michael Norman described it thusly: “We believe that the SDSC Cloud may well revolutionize how data is preserved and shared among researchers, especially massive datasets that are becoming more prevalent in this new era of data-intensive research and computing.” Or as he told us more succinctly, “I think of it as Flickr for scientific data.”

It’s not just for university academics. Science projects under the DOE, NIH, NASA, and other US agencies are all welcome. Even though the center is underwritten by the NSF, it draws large amounts of funding and researchers from all of those organizations. Like most NSF-supported HPC centers today, SDSC is a multi-agency hub.

Norman says that the immediate goal of this project is to support the current tape archive customers at SDSC with something that allows for data sharing. For collaboration, he says, tape archive is probably the worst possible solution. Not only is the I/O bandwidth too low, but with a tape platform, there is always a computer standing between you and your data.

With a disk-based cloud solution, you automatically get higher bandwidth, but more importantly, a web interface for accessing data. Every data file is provided a unique URL, making the information globally accessible from any web client. “It can talk to your iPhone as easily as it can talk to your mainframe,” says Norman.
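
As a rough illustration of what that URL-based access means in practice, here is a minimal sketch of fetching a publicly readable object over plain HTTP. The hostname, container, and file names are hypothetical stand-ins, not actual SDSC endpoints.

    # Minimal sketch: fetch a publicly readable object by its URL.
    # The endpoint, container, and object names below are hypothetical --
    # they stand in for whatever URL the SDSC Cloud assigns to a file.
    import requests

    url = "https://cloud.sdsc.example/v1/AUTH_project/climate-run-42/output.nc"

    response = requests.get(url, stream=True)
    response.raise_for_status()

    # Stream the object to local disk in chunks so large files don't
    # have to fit in memory.
    with open("output.nc", "wb") as f:
        for chunk in response.iter_content(chunk_size=1024 * 1024):
            f.write(chunk)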

The initial cloud infrastructure consists of 5.5 petabytes of disk capacity linked to servers via a couple of Arista Networks 7508 switches, which provide 10 terabits/second of connectivity. Dell R610 nodes are used for the storage servers, as well as for load balancing and proxy servers. The storage hardware is made up of Supermicro SC847E26 JBODs, with each JBOD housing 45 3TB Seagate disks. All of this infrastructure is housed and maintained at SDSC.
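
Some quick arithmetic on those figures, assuming the 5.5 petabytes refers to raw disk capacity (the article does not say explicitly), gives a sense of the JBOD count involved:

    # Rough arithmetic on the published hardware figures, assuming the
    # 5.5 PB number is raw disk capacity (an assumption, not stated above).
    disks_per_jbod = 45
    tb_per_disk = 3

    raw_tb_per_jbod = disks_per_jbod * tb_per_disk   # 135 TB raw per JBOD
    total_capacity_tb = 5.5 * 1000                   # 5.5 PB = 5,500 TB

    jbods_needed = total_capacity_tb / raw_tb_per_jbod
    print(f"{raw_tb_per_jbod} TB raw per JBOD, ~{jbods_needed:.0f} JBODs for 5.5 PB")
    # -> 135 TB raw per JBOD, ~41 JBODs for 5.5 PB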

The cloud storage will replace the current tape archive at the center, in this case a StorageTek system that currently holds about a petabyte of user data spread across 30 or 40 projects. Over the next 12 to 18 months, SDSC will migrate the data, along with their customers, over to the cloud and mothball the StorageTek hardware.

According to Norman, some of these tape users would like to move other data sets into these archives, and the cloud should make that process a lot smoother. “We are setting this up as a sustainable business and hope to have customers who use our cloud simply as a preservation environment,” he says. For example, they’re already talking with a NASA center that is looking to park its mission data somewhere accessible, but in an archive-type environment.

The move to a storage cloud was not entirely locally motivated, however. Government agencies like the NSF and NIH began mandating data sharing plans for all research projects. Principal investigators (PIs) can allocate up to 5 percent of their grant funding for data storage, but as it turns out, on a typical five- or six-figure research grant, that’s not very much money.

In order for such data sharing to be economically viable to researchers, it basically has to be a cost-plus model. Norman thinks they have achieved that with their pricing model, although he admits that “if you asked researchers what would be the right price, it would be zero.”

For 100 GB of storage, rates are $3.25/month for University of California (UC) users, $5.66/month for UC affiliates, and $7.80/month for customers outside the UC sphere. Users who are looking for a big chunk of storage in excess of 200 TB will need to pay for the extra infrastructure, in what the program refers to as their “micro-condo” offering.
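
For a sense of scale, here is a back-of-the-envelope estimate at those published rates. It assumes the price scales linearly in 100 GB increments, which the rates imply but which may not match SDSC’s actual billing rules.

    # Back-of-the-envelope cost estimate at the published rates, assuming
    # linear scaling in 100 GB increments (an assumption, not SDSC's
    # actual billing rules).
    RATES_PER_100GB = {
        "UC user": 3.25,
        "UC affiliate": 5.66,
        "external customer": 7.80,
    }

    def monthly_cost(terabytes, rate_per_100gb):
        """Monthly cost in dollars for a dataset of the given size."""
        blocks_of_100gb = terabytes * 10  # 1 TB = 10 x 100 GB
        return blocks_of_100gb * rate_per_100gb

    for tier, rate in RATES_PER_100GB.items():
        print(f"2 TB as a {tier}: ${monthly_cost(2, rate):,.2f}/month")
    # Prints roughly: $65.00/month (UC), $113.20/month (affiliate),
    # $156.00/month (external) for a 2 TB dataset.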

The condo pricing scheme is more complex, but it is offered to users with really large datasets and to research grants that build storage into their proposals and budgets. And even though this model doesn’t provide a transparently elastic cloud, the condo model at least makes the infrastructure expandable. According to Norman, their cloud is designed to scale up into the hundreds of petabytes.

Although data owners pay for capacity, thanks to government-supported science networks, data consumers don’t pay for I/O bandwidth. Wide area networks under projects such as CENIC (Corporation for Education Network Initiatives in California), ESnet (Energy Sciences Network), and XSEDE (Extreme Science and Engineering Discovery Environment) are public investments that can be leveraged by SDSC’s cloud. That can be a huge advantage over commercial storage clouds like Amazon’s Simple Storage Service (S3), where users have to account for data transfer costs.

While some researchers may end up using commercial offerings like Amazon S3, Norman thinks those types of setups generally don’t cater to academic types and are certainly not part of most researchers’ mindsets. They are also missing some of the high-performance networking enabled by the big 10GbE pipes and low-latency switching at SDSC.

Whether the center’s roll-your-own cloud will be able to compete against commercial clouds over the long term remains to be seen. One of the reasons a relatively small organization like SDSC can even build such a beast today is the availability of cheap commodity hardware, combined with the native expertise at the center to assemble high-end storage systems from parts.

There is also OpenStack — the open-source cloud OS that SDSC is using as the basis of its offering. Besides being essentially free for the taking, OpenStack is non-proprietary, which means the center will not be locked into any particular software or hardware vendor down the road.

“With OpenStack going open source, it’s now possible for anybody to set up a little cloud business,” explains Norman. “We’re just doing it in an academic environment.”
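
For readers curious what that looks like under the hood, the sketch below walks the OpenStack Swift v1 REST flow of authenticating for a token and then uploading an object into a container. The auth URL, account name, and key are placeholders, not SDSC’s actual endpoints or credentials.

    # Sketch of the OpenStack Swift v1 (TempAuth-style) REST flow:
    # authenticate for a token, then upload an object into a container.
    # The auth URL, account, and key below are placeholders, not SDSC's
    # real endpoints or credentials.
    import requests

    AUTH_URL = "https://cloud.sdsc.example/auth/v1.0"
    USER = "myproject:researcher"
    KEY = "secret-api-key"

    # Step 1: exchange credentials for a storage URL and an auth token.
    auth = requests.get(AUTH_URL, headers={"X-Auth-User": USER, "X-Auth-Key": KEY})
    auth.raise_for_status()
    storage_url = auth.headers["X-Storage-Url"]
    token = auth.headers["X-Auth-Token"]

    # Step 2: create a container (idempotent) and upload a file into it.
    headers = {"X-Auth-Token": token}
    requests.put(f"{storage_url}/simulation-results", headers=headers).raise_for_status()

    with open("output.nc", "rb") as f:
        put = requests.put(
            f"{storage_url}/simulation-results/output.nc",
            headers=headers,
            data=f,
        )
    put.raise_for_status()

    # The object is now addressable at <storage_url>/simulation-results/output.nc,
    # which is the "unique URL per file" behavior described earlier.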
