SDSC’s New Storage Cloud: ‘Flickr for Scientific Data’

By Michael Feldman

October 6, 2011

Last month, the San Diego Supercomputer Center launched what it believes is “the largest academic-based cloud storage system in the U.S.” The infrastructure is designed to serve the country’s research community and will be available to scientists and engineers from essentially any government agency that needs to archive and share super-sized data sets.

Certainly the need for such a service exists. The modern practice of science is a community activity, and the way researchers collaborate is by sharing their data. Before the emergence of cloud storage, the main way to accomplish that was via email and by sending manuscripts back and forth over the internet. But with the coalescence of some old and new technologies, there are now economically viable ways to share really large amounts of data with colleagues.

In the press release describing the storage cloud, SDSC director Michael Norman described it thusly: “We believe that the SDSC Cloud may well revolutionize how data is preserved and shared among researchers, especially massive datasets that are becoming more prevalent in this new era of data-intensive research and computing.” Or as he told us more succinctly, “I think of it as Flickr for scientific data.”

It’s not just for university academics. Science projects under the DOE, NIH, NASA, and other US agencies are all welcome. Even though the center is underwritten by the NSF, it receives large amounts of funding, and researchers, from all of those organizations. Like most NSF-supported HPC centers today, SDSC is a multi-agency hub.

Norman says that the immediate goal of this project is to support the current tape archive customers at SDSC with something that allows for data sharing. For collaboration, he says, tape archive is probably the worst possible solution. Not only is the I/O bandwidth too low, but with a tape platform, there is always a computer standing between you and your data.

With a disk-based cloud solution, you automatically get higher bandwidth, but more importantly, a web interface for accessing data. Every data file is provided a unique URL, making the information globally accessible from any web client. “It can talk to your iPhone as easily as it can talk to your mainframe,” says Norman.
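As a rough illustration of that per-file URL model (a hedged sketch, not SDSC documentation; the host, path, and filename below are hypothetical placeholders), any HTTP-capable client can pull down a shared object directly:

```python
# Minimal sketch: fetching a shared object by its unique URL.
# The endpoint and object name are hypothetical, not real SDSC paths.
import urllib.request

url = "https://cloud.example.edu/v1/AUTH_project/container/simulation_output.h5"

with urllib.request.urlopen(url) as resp, open("simulation_output.h5", "wb") as out:
    out.write(resp.read())
```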

The initial cloud infrastructure consists of 5.5 petabytes of disk capacity linked to servers via a couple of Arista Networks 7508 switches, which provide 10 terabits/second of connectivity. Dell R610 nodes are used for the storage servers, as well as for load balancing and proxy servers. The storage hardware is made up of Supermicro SC847E26 JBODs, with each JBOD housing 45 3TB Seagate disks. All of this infrastructure is housed and maintained at SDSC.
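A quick back-of-the-envelope check of those figures (my arithmetic from the numbers above, not an SDSC statement) gives a sense of the scale:

```python
# Rough arithmetic on the stated hardware: raw capacity per JBOD and the
# approximate number of JBODs behind 5.5 PB, ignoring replication/RAID overhead.
disks_per_jbod = 45
tb_per_disk = 3
tb_per_jbod = disks_per_jbod * tb_per_disk        # 135 TB raw per JBOD
total_pb = 5.5
jbods = total_pb * 1000 / tb_per_jbod             # ~41 JBODs of raw capacity
print(f"{tb_per_jbod} TB per JBOD, roughly {jbods:.0f} JBODs for {total_pb} PB")
```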

The cloud storage will replace the current tape archive at the center, in this case a StorageTek system that currently holds about a petabyte of user data spread across 30 or 40 projects. Over the next 12 to 18 months, SDSC will migrate the data, along with their customers, over to the cloud and mothball the StorageTek hardware.

According to Norman, some of these tape users would like to move other data sets into these archives, and the cloud should make that process a lot smoother. “We are setting this up as a sustainable business and hope to have customers who use our cloud simply as preservation environment,” he says. For example, they’re already talking with a NASA center that is looking to park its mission data somewhere accessible, but in an archive-type environment.

The move to a storage cloud was not entirely locally motivated, however. Government agencies like the NSF and NIH have begun mandating data sharing plans for all research projects. Principal investigators (PIs) can allocate up to 5 percent of their grant funding for data storage, but as it turns out, on a typical five- or six-figure research grant, that’s not very much money.

In order for such data sharing to be economically viable to researchers, it basically has to be a cost-plus model. Norman thinks they have achieved that with their pricing model, although he admits that “if you asked researchers what would be the right price, it would be zero.”

For 100 GB of storage, rates are $3.25/month for University of California (UC) users, $5.66/month for UC affiliates, and $7.80/month for customers outside the UC sphere. Users who are looking for a big chunk of storage in excess of 200 TB will need to pay for the extra infrastructure, in what the program refers to as their “micro-condo” offering.
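To put those rates in concrete terms, here is a worked example (the 10 TB project size is hypothetical; only the per-100 GB rates come from the article):

```python
# Illustrative cost calculation from the published per-100 GB monthly rates.
# The 10 TB project size is a made-up example, not a figure from SDSC.
rates_per_100gb = {"UC": 3.25, "UC affiliate": 5.66, "non-UC": 7.80}
project_gb = 10_000                                # a hypothetical 10 TB dataset
units = project_gb / 100                           # billing is per 100 GB
for tier, rate in rates_per_100gb.items():
    monthly = units * rate
    print(f"{tier}: ${monthly:,.2f}/month, about ${monthly * 12:,.2f}/year")
```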

The condo pricing scheme is more complex, but is offered to users with really large datasets and for research grants that include storage considerations for proposals and budgeting. And even though this model doesn’t provide for a transparently elastic cloud, the condo model at least makes the infrastructure expandable. According to Norman, their cloud is designed to scale up into the hundreds of petabytes realm.

Although data owners pay for capacity, thanks to government-supported science networks, data consumers don’t pay for I/O bandwidth. Wide area networks under projects such as CENIC (Corporation for Education Network Initiatives in California), ESnet (Energy Sciences Network), and XSEDE (Extreme Science and Engineering Discovery Environment) are public investments that can be leveraged by SDSC’s cloud. That can be a huge advantage over commercial storage clouds like Amazon’s Simple Storage Service (S3), where users have to account for data transfer costs.

While some researchers may end up using commercial offerings like Amazon S3, Norman thinks those types of setups generally don’t cater to academic users and are certainly not part of most researchers’ mindsets. They also lack some of the high-performance networking enabled by big 10GbE pipes and low-latency switching at SDSC.

Whether the center’s roll-your-own cloud will be able to compete against commercial clouds on a long-term basis remains to be seen. One of the reasons a relatively small organization like SDSC can even build such a beast today is thanks in large part to the availability of cheap commodity hardware and the native expertise at the center to build high-end storage systems from parts.

There is also OpenStack, the open-source cloud OS that SDSC is using as the basis of its offering. Besides being essentially free for the taking, the non-proprietary nature of OpenStack also means the center will not be locked into any particular software or hardware vendors down the road.
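OpenStack’s object storage layer (Swift) is the natural component for a service like this; as a hedged sketch of what writing a dataset into such a cloud might look like (the python-swiftclient library, auth URL, credentials, and container name are all assumptions for illustration, not details from the article):

```python
# Hedged sketch: uploading a dataset to an OpenStack Swift-style object store
# with python-swiftclient. Endpoint, credentials, and names are placeholders.
from swiftclient import client

conn = client.Connection(
    authurl="https://cloud.example.edu/auth/v1.0",   # hypothetical auth endpoint
    user="project:researcher",
    key="secret-api-key",
)

conn.put_container("climate-run-42")                  # create a container if needed
with open("simulation_output.h5", "rb") as f:
    conn.put_object("climate-run-42", "simulation_output.h5", contents=f)
```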

“With OpenStack going open source, it’s now possible for anybody to set up a little cloud business,” explains Norman. “We’re just doing it in an academic environment.”
