Closing Tape’s Generational Gap

By Nicole Hemsoth

October 9, 2013

We are in Broomfield, Colorado, today with tape storage company Spectra Logic to get a better grasp on the future direction of their storage medium's role in both HPC and larger-scale enterprise environments, both of which are being inundated with massive data growth.

As the event comes to a close, one thing is certain: for a relatively small company in an industry beholden to the perception that it's tied to an aging technology, Spectra is managing to turn its worldview around to meet a new generation of users in new data-driven markets (web-based retail, social networks, media and entertainment, etc.). The energy was palpable this week around some core refreshes to their existing line of tape products, from the massive T-Finity boxes that loom in supercomputing centers like Blue Waters to smaller tape systems that power video production at major cable networks. And for anyone who says “tape is dead” outside of the big HPC sites, think again…

While the company is set to make a major announcement later this week about its roadmap and some key products, all signs are pointing to a future that is far more heavily defined by their investments in software. During a presentation this morning describing the trajectory of their storage vision, senior executives continually emphasized that although they’re often defined as a hardware-driven company, around 80% of their employees are software engineers.

The new focus for Spectra Logic is moving beyond a hardware-centric view of tape, argued the company's CEO, Nathan Thompson. At the core of this shift is what they're calling “deep storage,” which centers on offering a REST interface, persistence, cost effectiveness, efficiency, security and ease of use. While all of this is driven by their tape products, the real focus is on offering a native RESTful interface to robotic tape storage systems, the Deep Simple Storage Service (DS3), which will give users more open access to tape as a long-term storage approach.

The new DS3 interface will be supported by all of their tape products. The company emphasized how it opens access to tape for new users, as well as for ISV partners and end-user developers who can leverage the software clients Spectra is making available or write their own. Additionally, the DS3 interface allows applications to move large quantities of data without the burdensome processes and technologies that once muddled that course, which Spectra says makes it ideal for accessing tape for large data objects. On the management front, users can tap intelligent data object reads and writes, which the company says will optimize tape drive and tape media utilization and performance. DS3 supports deep storage across a wide capacity range, from configurations as small as 15 terabytes up to systems that can scale past the coming exabyte era in a single tape storage system.
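To make the idea of a RESTful front end to tape concrete, here is a minimal sketch of what an application-side client for such an interface might look like, assuming a simple S3-style PUT/GET over HTTP. The gateway host, bucket name, object keys and bearer-token auth below are hypothetical placeholders, not Spectra's documented DS3 API; a production client would use the vendor's own SDK or published protocol.

```python
# Hypothetical sketch: archiving to and restoring from a tape-backed
# object store through a RESTful gateway. Endpoint, bucket and auth
# are illustrative placeholders, not Spectra's actual DS3 interface.
import requests

DS3_ENDPOINT = "https://tape-gateway.example.com"   # hypothetical gateway host
BUCKET = "climate-results"                          # hypothetical bucket name
AUTH = {"Authorization": "Bearer EXAMPLE_TOKEN"}    # placeholder credentials

def archive_object(key: str, path: str) -> None:
    """PUT a local file to the tape-backed object store."""
    with open(path, "rb") as f:
        resp = requests.put(f"{DS3_ENDPOINT}/{BUCKET}/{key}",
                            data=f, headers=AUTH, timeout=300)
    resp.raise_for_status()

def restore_object(key: str, path: str) -> None:
    """GET an archived object back; tape restores may take minutes to hours."""
    resp = requests.get(f"{DS3_ENDPOINT}/{BUCKET}/{key}",
                        headers=AUTH, stream=True, timeout=None)
    resp.raise_for_status()
    with open(path, "wb") as out:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            out.write(chunk)

if __name__ == "__main__":
    archive_object("run-2013-10/output.nc", "output.nc")
```

The point of the sketch is that the application only speaks plain HTTP; the robotics, tape drives and media management stay behind the gateway.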

When the company presented per-terabyte cost breakdowns across its product line, the weakness of disk as the long-term storage format of choice for certain markets was hard to question. The problem has always been that even when the ROI case for moving to tape was clear to users, usability, data access and data movement have proven to be significant challenges.

Thompson says that his company is evolving with the market. As it stands, a great many of the users who are familiar with writing to and using tape are close to retiring, and the newest generation of potential users is not likely to learn the same modes of working with tape libraries. Accordingly, as Thompson noted today, there is a new tier of storage that can target “large bulk quantities of data for extended and possibly infinite periods of time while meeting the needs of newer datacenter architectures that leverage storage in the form of data objects and utilize RESTful interfaces.” These interfaces are set to modernize access to and use of tape and democratize interactions with archiving (and archived) data.

The key to Spectra's growth on the software-driven side is the RESTful interface they've developed that lets users talk to tape in a more modern way. Thompson pointed to a supercomputing center doing weather simulations that wants to add a private cloud into the mix to share data. The center is using the REST interface to let researchers around the globe drop their results into the cloud and onto tape without the complications and specialization of writing to tape in between.

Spectra's CMO, Molly Rector, described in detail the many benefits of object storage over file system approaches, noting that ease of use is a major component. Echoing Thompson's case for modernizing tape use and access via more common tools (the REST approach), she noted that users can move beyond the nested nature of file systems, which require users to understand in detail both location and content in order to fetch data, especially at the petabyte level. In their approach, objects are assigned a unique ID that makes the physical location of the data irrelevant. Objects can be moved across storage pools among one or multiple tiers, and they can be shared or copied within an object store, making them more accessible for search, data mining and analytics across billions of objects without moving bulk data via complex, specialized tooling.
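The location-independence argument is easiest to see in miniature. The toy sketch below (not Spectra's implementation, and with entirely made-up pool names) shows how addressing data by a unique object ID lets the store migrate bytes between tiers without ever changing how clients refer to them.

```python
# Toy illustration of location-independent object IDs: clients fetch by ID,
# while the store is free to move the bytes between tiers (disk cache,
# tape pool, etc.) without changing the object's address. Purely a sketch.
import uuid

class ObjectStore:
    def __init__(self):
        self._index = {}   # object_id -> (pool_name, payload)

    def put(self, payload: bytes, pool: str = "disk-cache") -> str:
        object_id = str(uuid.uuid4())            # unique, location-free handle
        self._index[object_id] = (pool, payload)
        return object_id

    def get(self, object_id: str) -> bytes:
        _pool, payload = self._index[object_id]
        return payload                           # caller never sees a path

    def migrate(self, object_id: str, new_pool: str) -> None:
        _old, payload = self._index[object_id]
        self._index[object_id] = (new_pool, payload)   # ID stays the same

store = ObjectStore()
oid = store.put(b"simulation results", pool="disk-cache")
store.migrate(oid, "tape-pool")                  # move tiers behind the scenes
assert store.get(oid) == b"simulation results"
```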

The company's approach to deep storage is further enhanced by allowing users to store objects that are self-describing and written in an open file format. Since the whole goal of tape is to offer a low-cost, persistent, secure solution for data that does not require immediate access, such data can sit for an incredibly long time, until the next storage medium comes along. Migrating petabytes (or more) of data is a nightmare, but by building portability into this next generation, Spectra can make such migrations more seamless.
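As a rough illustration of what “self-describing, open format” can mean in practice, the sketch below bundles a payload with plain JSON metadata inside a standard tar archive, so a future system could interpret the object without the original application. The file names and metadata fields are illustrative assumptions, not Spectra's actual on-tape format.

```python
# Sketch of a "self-describing" archive object: the payload is bundled with
# open-format (JSON) metadata so a future system can interpret it without
# the original application. Names and fields here are illustrative only.
import hashlib, io, json, tarfile, time

def pack_self_describing(payload: bytes, name: str, out_path: str) -> None:
    meta = {
        "name": name,
        "size": len(payload),
        "sha256": hashlib.sha256(payload).hexdigest(),
        "created": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "format": "application/octet-stream",
    }
    with tarfile.open(out_path, "w") as tar:
        for fname, data in (("metadata.json", json.dumps(meta, indent=2).encode()),
                            ("payload.bin", payload)):
            info = tarfile.TarInfo(name=fname)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

pack_self_describing(b"example data", "climate-run-42", "object_0001.tar")
```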

Thompson detailed Spectra's growth curve from its roots in 1979 all the way into the present. While the first several years were rather flat, the privately held company of approximately 400 people has an attractive growth curve that is set to continue in the coming year, driven in part by emerging markets outside of traditional HPC, including video production and web-based business operations.

Rector pointed to a distributed customer base, with HPC customers making up around 18 percent of the company's overall business. During the event, most of the customers and interested parties we spoke with, including NASCAR's video production lead, were in the media and entertainment space. Others, including Kevin Graham, Principal Infrastructure Architect at Yahoo, discussed how opening access to tape provides one of the most cost-effective solutions for data that does not power time-critical user services but instead forms an efficient backend for the web giant's internal data-intensive projects.

Spectra cited a few noteworthy revenue figures, including 16% year-over-year growth in 2013, with 14% growth in enterprise libraries and 12% growth in midrange libraries. The company also highlighted its R&D investment over the years, which, following solid profitability, has hovered between 10 and 14%. This year it pushed 13% into R&D.
