The Answer to Your Backup Needs is Grid-Based

By Dennis Barker, GRIDtoday

July 28, 2008

Backup might not be everyone’s favorite topic, but it is the one IT operation that, if screwed up, can mean a stay at that place where an orange jumpsuit is part of the welcome package. It has to be done — and done right. And with the amount of data that has to be backed up going in only one direction, it’s a good idea to have a backup system that can grow along with it. Wouldn’t it be nice if it were also budget-sensitive, extremely reliable, fast, accessible, and easy to implement?

That’s the promise behind ExaGrid’s disk-based backup system, which the company recently fortified with features aimed at multi-location datacenters.

Essentially, ExaGrid builds modular storage servers around arrays of SATA disks that plug into a grid architecture. As the company’s name suggests, scalability always has been part of the design. An ExaGrid server (in flavors from 1TB to 5TB) can be plugged into the grid as needed, and “through our software, it virtualizes into the existing system,” says Bill Andrews, ExaGrid president and CEO. “When you add one of our boxes, you’re not just adding more disk, you’re adding storage servers. You’re adding more processing power and memory, which you need in order to scale up and handle more data. And all those resources virtualize into one large system. To the backup server, it’s just more capacity, and without any disruption.”

ExaGrid does a couple of other things that make its data-handling approach unique and that yield direct benefits to users. The company’s process uses byte-level de-duplication to significantly reduce the amount of data that has to be stored. This means the system can detect changes to a file at the byte level and, after backing up the original, save only changed or new data. Instead of backing up Homer’s spreadsheet every day, day after day after day, the system saves only alterations. Studies show that most user files seldom change after a certain age. Avoiding unnecessary duplication can reduce the amount of storage required by 20:1, according to both ExaGrid and independent analysts. ExaGrid also compresses the data, further reducing the amount of platter needed for backup. For organizations sending data to remote sites, smaller backup files also mean faster transmission across a WAN.
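The byte-level change detection described above can be sketched roughly like this (a simplified, hypothetical illustration using fixed-size block comparison — not ExaGrid’s actual algorithm, which is proprietary):

```python
def changed_regions(old: bytes, new: bytes, block: int = 64):
    """Compare two versions of a file block by block and return only
    the regions that differ, as (offset, data) pairs. A production
    system would also handle insertions, deletions, and resized files."""
    deltas = []
    for off in range(0, max(len(old), len(new)), block):
        if old[off:off + block] != new[off:off + block]:
            deltas.append((off, new[off:off + block]))
    return deltas

# Homer edits 64 bytes in the middle of a 1KB spreadsheet:
original = b"A" * 1024
edited = original[:512] + b"B" * 64 + original[576:]
deltas = changed_regions(original, edited)

# Only the changed blocks need to be stored for the second backup.
stored = sum(len(data) for _, data in deltas)
print(f"stored {stored} of {len(edited)} bytes")  # stored 64 of 1024 bytes
```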

“Average compression is about 2:1, so a 1TB backup file would be stored as 500GB,” Andrews says. “Previous backup files are then kept as the byte-level changes only, which averages to about 2 percent of the data.”
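Taken together, those averages make the capacity math easy to sketch. This back-of-the-envelope calculation assumes the per-version deltas compress at the same 2:1 ratio as the full backup, which the article does not state explicitly:

```python
def retained_gb(full_gb: float, versions: int,
                compression: float = 2.0, change_rate: float = 0.02) -> float:
    """Disk needed to retain `versions` backups of a `full_gb` backup job:
    one compressed full copy (the latest) plus a byte-level delta for each
    prior version, using the ~2:1 compression and ~2 percent change rate
    quoted above. Assumes deltas compress like full backups."""
    latest = full_gb / compression
    deltas = (versions - 1) * full_gb * change_rate / compression
    return latest + deltas

# Retaining 30 nightly versions of a 1TB (1000GB) backup:
print(round(retained_gb(1000, 30), 1))  # 790.0 GB, vs. 30,000 GB raw
```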

“We’re trying to make backup not only better but faster,” Andrews says. “Customers tell us that we’ve reduced their backup time by at least 30 percent, some much higher. We let the backup run to disk and de-dupe afterward. Doing it on the fly slows down the backup process, and you can’t have your backups running in the morning when people come to work. We also keep the latest backup in its complete form, in case you need it quickly. You don’t have to put a zillion blocks together. Nearly all restores come from the latest version.” ExaGrid says its typical restore throughput is “up to 2.6 terabytes per hour.”
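Keeping the latest backup whole while older versions exist only as byte-level changes resembles a reverse-incremental scheme. The sketch below is a hypothetical illustration of that idea, assuming fixed-length backup images — not ExaGrid’s implementation:

```python
class ReverseIncrementalStore:
    """Newest backup kept in full; each older version is stored only as
    the blocks that differed from the version that replaced it, so the
    most common restore (latest version) needs no reassembly at all."""

    def __init__(self, block: int = 64):
        self.block = block
        self.latest = None
        self.deltas = []  # oldest-first; deltas[i] rebuilds version i from i+1

    def _blocks(self, data: bytes) -> dict:
        return {off: data[off:off + self.block]
                for off in range(0, len(data), self.block)}

    def backup(self, data: bytes) -> None:
        if self.latest is not None:
            old, new = self._blocks(self.latest), self._blocks(data)
            # Keep only the old blocks that the new version overwrote.
            self.deltas.append({off: blk for off, blk in old.items()
                                if new.get(off) != blk})
        self.latest = data

    def restore(self, versions_back: int = 0) -> bytes:
        data = self.latest  # zero deltas applied for the latest version
        for delta in list(reversed(self.deltas))[:versions_back]:
            blocks = self._blocks(data)
            blocks.update(delta)
            data = b"".join(blocks[off] for off in sorted(blocks))
        return data

store = ReverseIncrementalStore()
store.backup(b"A" * 256)                 # Monday's backup
store.backup(b"A" * 128 + b"B" * 128)    # Tuesday's backup
print(store.restore() == b"A" * 128 + b"B" * 128)   # latest, no reassembly
print(store.restore(1) == b"A" * 256)               # one version back
```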

Clunk Goes the Tape

The company has been a proponent of disk-based backup since it started in 2002, when tape was still the de facto standard but losing its glow. The advantages of disk over tape — speed and reliability among them — were becoming more and more apparent, but the rap against disk was price. SATA drives were initially about 40 times more expensive than tape. Today, that differential has come down to about 10, and ExaGrid’s technology equalizes the economics, Andrews says. “With compression, you use less space, bringing the price of disk down to about only five times more than tape. Add byte-level de-duplication and the price is about equivalent to tape.” The thought of never having to track down a tape, only to find out it is defective, ought to enter into the equation, too.

However, ExaGrid is not necessarily out to obliterate tape. “You can set up our system to copy your nightly or weekly backups through the backup server to tape,” Andrews says. “About half our customers make tape backup, and about half are disk-only. Although we see the trend moving away. Some of our customers are shutting off tape and adding another one of our systems.”

The ExaGrid Disk-based Backup System is a regular NAS unit made up of RAID-6 drives with a hot spare, Xeon dual-core processors, Gigabit Ethernet connections and management intelligence. Load balancing is built in. You can buy server “building blocks” in sizes of 1, 2, 3, 4, or 5TB, and add them to the grid in any size as demand grows. When models with faster processors or greater densities become available, they too can be added.

One of the key features of ExaGrid’s system is that it works seamlessly with the backup systems people are used to, including Symantec Backup Exec and NetBackup, CA Arcserve, EMC Networker and CommVault Galaxy. The company says no changes are required to your current setup. “You would continue to do your backup jobs as you do them today,” Andrews says. “ExaGrid sits behind your current backup server as a storage repository.”

Multi-Site Protection

ExaGrid’s target user has at least a terabyte of data and up to 60TB or so to contend with, Andrews says, and last week the company announced enhancements designed for organizations with datacenters in multiple locations. “We’re now giving customers multi-site backup capabilities that will let them cross-protect up to nine locations,” Andrews says. “Let’s say you have major offices in San Francisco, Dallas, New York, Chicago and Boston. You can now cross-protect across all of them. You can backup data locally and then send a copy to any of those other sites for backup. You can point the data at any other locations so that you can recover from a disaster. And because we’re only moving byte-level changes to off-site locations, you’re shipping only a fraction of the data across the WAN.” The company also added functions that allow for better monitoring of backup jobs.
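The cross-protection Andrews describes amounts to a replication map between sites, with only byte-level deltas crossing the WAN. A minimal sketch of the idea (the ring topology and site names here are purely illustrative, not an ExaGrid configuration):

```python
# Hypothetical cross-protection map: each site backs up locally, then
# replicates its byte-level deltas to a partner site for disaster recovery.
sites = ["San Francisco", "Dallas", "New York", "Chicago", "Boston"]

# A simple ring: each site is protected by the next one in the list.
protection = {site: sites[(i + 1) % len(sites)]
              for i, site in enumerate(sites)}

print(protection["Boston"])  # San Francisco
```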

Report from the Field

A fairly typical ExaGrid user, MemorialCare Medical Centers runs six hospitals in Los Angeles and Orange County, Calif. With patient and business data to protect, backups were taking up to 18 hours a day and much of the IT staff’s time, and consuming up to 300 tapes a week, says Jorge Cepeda, network engineer. With the ExaGrid system, backup time was reduced to 8 hours and the process was “painless,” Cepeda says. MemorialCare plans to install ExaGrid systems at all its hospitals in order to replicate data for disaster recovery.

In a report issued earlier this year, Enterprise Strategy Group said its ESG Lab tests “confirmed that ExaGrid backup-to-disk solutions combine the benefits of high-density SATA drives, post-process data de-duplication and scalable grid architecture to provide a cost-effective, energy-efficient alternative to tape.” According to the report’s lead author, ESG analyst Claude Bouffard, “Organizations struggling with the cost, complexity, and risk associated with tape backups would be wise to consider the bottom-line savings that can be achieved with ExaGrid: faster backups, quicker and more reliable restores, lower risk, lower expenses … and last, but not least, a greener solution with optimized power and cooling.”

The Taneja Group issued a statement as part of ExaGrid’s announcement last week in which senior analyst Jeff Boles said that in Taneja’s lab tests, “ExaGrid easily performed, scaled, and de-duplicated right out-of-the-box. ExaGrid’s scalability makes their performance claims even more compelling.” In a 2007 study, Taneja Group analysts recommended that organizations seeking a disk-based solution (“no longer optional,” they said) that delivers “ROI, reliability, flexibility, and demonstrable ease of use” should start with ExaGrid.

“I doubt we can ever make backup fun,” Andrews says, “but we will keep trying to make it better.”
