The Answer to Your Backup Needs Is Grid-Based

By Dennis Barker, GRIDtoday

July 28, 2008

Backup might not be everyone’s favorite topic, but it is the one IT operation that, if screwed up, can mean a stay at that place where an orange jumpsuit is part of the welcome package. It has to be done — and done right. And with the amount of data that has to be backed up going in only one direction, it’s a good idea to have a backup system that can grow along with it. Wouldn’t it be nice if it were also budget-sensitive, extremely reliable, fast, accessible, and easy to implement?

That’s the promise behind ExaGrid’s disk-based backup system, which the company recently fortified with features aimed at multi-location datacenters.

Essentially, ExaGrid builds modular storage servers around arrays of SATA disks that plug into a grid architecture. As the company’s name suggests, scalability has always been part of the design. An ExaGrid server (in flavors from 1TB to 5TB) can be plugged into the grid as needed, and “through our software, it virtualizes into the existing system,” says Bill Andrews, ExaGrid president and CEO. “When you add one of our boxes, you’re not just adding more disk, you’re adding storage servers. You’re adding more processing power and memory, which you need in order to scale up and handle more data. And all those resources virtualize into one large system. To the backup server, it’s just more capacity, and without any disruption.”
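A toy model makes the scale-out idea easier to picture. The sketch below is purely illustrative and is not ExaGrid’s software: each “building block” added to the grid contributes capacity, processing and memory, and the pool presents itself to the backup server as one repository.

```python
# Toy model of a scale-out backup grid. Illustrative only; not ExaGrid's
# actual software. Each server added contributes disk, CPU, and memory,
# and the grid presents one aggregate capacity to the backup server.

from dataclasses import dataclass

@dataclass
class GridServer:
    capacity_tb: int     # ExaGrid building blocks come in 1TB to 5TB sizes
    cores: int
    memory_gb: int

class BackupGrid:
    def __init__(self):
        self.servers: list[GridServer] = []

    def add(self, server: GridServer) -> None:
        """Plug a new server into the grid; the backup server simply
        sees a larger repository, with no disruption."""
        self.servers.append(server)

    @property
    def capacity_tb(self) -> int:
        return sum(s.capacity_tb for s in self.servers)

grid = BackupGrid()
grid.add(GridServer(capacity_tb=5, cores=2, memory_gb=4))
grid.add(GridServer(capacity_tb=3, cores=2, memory_gb=4))
print(f"Grid now presents {grid.capacity_tb} TB to the backup server")
```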

ExaGrid does a couple of other things that make its data-handling approach unique and that yield direct benefits to users. The company’s process uses byte-level de-duplication to significantly reduce the amount of data that has to be stored. The system detects changes to a file at the byte level and, after backing up the original, saves only changed or new data. Instead of backing up Homer’s spreadsheet every day, day after day after day, the system saves only alterations. Studies show that most user files seldom change after a certain age. Avoiding unnecessary duplication can reduce the amount of storage required by 20:1, according to both ExaGrid and independent analysts. ExaGrid also compresses the data, further reducing the amount of platter needed for backup. For organizations sending data to remote sites, smaller backup files also mean faster transmission across a WAN.
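To make the idea concrete, here is a minimal, hypothetical sketch of byte-level delta encoding in Python. It illustrates the general technique, not ExaGrid’s actual algorithm: after the first full copy, only the byte ranges that changed are recorded.

```python
# Minimal sketch of byte-level delta encoding (illustrative only; not
# ExaGrid's algorithm). After the first full copy, each subsequent run
# stores only the byte ranges that changed since the previous version.

def byte_delta(old: bytes, new: bytes) -> list[tuple[int, bytes]]:
    """Return (offset, changed_bytes) runs where `new` differs from `old`."""
    delta, run_start = [], None
    for i in range(max(len(old), len(new))):
        same = i < len(old) and i < len(new) and old[i] == new[i]
        if not same and run_start is None:
            run_start = i                          # a differing run begins
        elif same and run_start is not None:
            delta.append((run_start, new[run_start:i]))
            run_start = None                       # the run ends
    if run_start is not None:
        delta.append((run_start, new[run_start:]))
    return delta

def apply_delta(old: bytes, new_len: int, delta: list[tuple[int, bytes]]) -> bytes:
    """Rebuild the new version from the previous version plus the delta."""
    buf = bytearray(old[:new_len].ljust(new_len, b"\x00"))
    for offset, chunk in delta:
        buf[offset:offset + len(chunk)] = chunk
    return bytes(buf)

old = b"Quarterly sales: 100 units in Q1, 200 in Q2."
new = b"Quarterly sales: 150 units in Q1, 200 in Q2."
d = byte_delta(old, new)
assert apply_delta(old, len(new), d) == new
print(d)   # only the changed bytes are stored: [(18, b'5')]
```

Instead of a second full copy of Homer’s spreadsheet, the store holds a one-byte delta.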

“Average compression is about 2:1, so a 1TB backup file would be stored as 500GB,” Andrews says. “Previous backup files are then kept as the byte-level changes only, which average out to about 2 percent of the data.”
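A quick back-of-the-envelope calculation shows how those figures compound over a retention window (the 20-copy window below is an assumption for illustration; the other numbers are the ones Andrews cites):

```python
# Back-of-the-envelope storage math using the figures Andrews cites.
# The retention count is a hypothetical chosen for illustration.

full_backup_tb = 1.0      # nightly backup size before reduction
compression = 2.0         # "average compression is about 2:1"
change_rate = 0.02        # deltas are "about 2 percent of the data"
retained_copies = 20      # assumed retention window

first_copy = full_backup_tb / compression                       # 0.50 TB
deltas = (retained_copies - 1) * full_backup_tb * change_rate   # 0.38 TB
stored = first_copy + deltas
raw = retained_copies * full_backup_tb

print(f"Stored: {stored:.2f} TB for {raw:.0f} TB of raw backups "
      f"({raw / stored:.0f}:1 reduction)")
# Stored: 0.88 TB for 20 TB of raw backups (23:1 reduction)
```

Run the numbers and the result lands right around the 20:1 reduction the company and independent analysts claim.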

“We’re trying to make backup not only better but faster,” Andrews says. “Customers tell us that we’ve reduced their backup time by at least 30 percent, some much higher. We let the backup run to disk and de-dupe afterward. Doing it on the fly slows down the backup process, and you can’t have your backups running in the morning when people come to work. We also keep the latest backup in its complete form, in case you need it quickly. You don’t have to put a zillion blocks together. Nearly all restores come from the latest version.” ExaGrid says its typical restore throughput is “up to 2.6 terabytes per hour.”
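The “latest copy stays whole” scheme Andrews describes is sometimes called reverse-delta storage. Below is a minimal sketch of the general idea, reusing the byte_delta and apply_delta helpers from the earlier sketch; again, this is an illustration, not ExaGrid’s implementation.

```python
# Minimal sketch of reverse-delta storage (illustrative only; not
# ExaGrid's implementation). Reuses byte_delta() and apply_delta()
# from the earlier sketch. The newest backup is always kept complete;
# de-duplication happens as a post-process, after the backup lands.

class BackupStore:
    def __init__(self):
        self.latest = None    # newest backup, kept complete on disk
        self.history = []     # older versions as (length, delta) pairs

    def ingest(self, backup: bytes) -> None:
        """Post-process step: the new full copy lands first; only then
        is the previous full copy reduced to a byte-level delta."""
        if self.latest is not None:
            # record how to get *back* to the older version from the new one
            self.history.append((len(self.latest),
                                 byte_delta(backup, self.latest)))
        self.latest = backup

    def restore(self, age: int = 0) -> bytes:
        """age=0 is the latest backup (no reassembly needed); age=1 is
        the previous run, rebuilt by walking the deltas backward."""
        data = self.latest
        for old_len, delta in reversed(self.history[len(self.history) - age:]):
            data = apply_delta(data, old_len, delta)
        return data

store = BackupStore()
store.ingest(b"monday: 100 widgets sold")
store.ingest(b"tuesday: 100 widgets sold")
assert store.restore() == b"tuesday: 100 widgets sold"  # fast path, whole copy
assert store.restore(age=1) == b"monday: 100 widgets sold"
```

Since nearly all restores hit the latest version, the common case never has to reassemble anything.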

Clunk Goes the Tape

The company has been a proponent of disk-based backup since it started in 2002, when tape was still the de facto standard but losing its glow. The advantages of disk over tape — speed and reliability among them — were becoming more and more apparent, but the rap against disk was price. SATA drives initially cost about 40 times more than tape. Today, that differential has come down to about 10 times, and ExaGrid’s technology equalizes the economics, Andrews says. “With compression, you use less space, bringing the price of disk down to only about five times more than tape. Add byte-level de-duplication and the price is about equivalent to tape.” The thought of never having to track down a tape, only to find out it is defective, ought to enter into the equation, too.
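For readers keeping score, the arithmetic behind that chain of claims looks roughly like this. All numbers are the article’s round figures, normalized to tape; the residual de-duplication factor is inferred, not stated:

```python
# Rough per-gigabyte cost arithmetic behind Andrews' claims, with tape
# normalized to 1.0. Round figures from the article; illustrative only.

disk_vs_tape = 10.0    # raw disk is "about 10" times the cost of tape
compression = 2.0      # "average compression is about 2:1"

after_compression = disk_vs_tape / compression
print(f"With compression: ~{after_compression:.0f}x tape")   # ~5x

# To reach "about equivalent to tape," de-duplication must cut stored
# bytes by roughly the remaining factor beyond compression:
needed_dedup = after_compression / 1.0
print(f"De-dup factor needed beyond compression: ~{needed_dedup:.0f}:1")  # ~5:1
```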

However, ExaGrid is not necessarily out to obliterate tape. “You can set up our system to copy your nightly or weekly backups through the backup server to tape,” Andrews says. “About half our customers make tape backups, and about half are disk-only, although we see the trend moving away from tape. Some of our customers are shutting off tape and adding another one of our systems.”

The ExaGrid Disk-based Backup System is a standard NAS unit made up of RAID-6 drives with a hot spare, dual-core Xeon processors, Gigabit Ethernet connections and management intelligence. Load balancing is built in. You can buy server “building blocks” in sizes of 1, 2, 3, 4, or 5TB, and add them to the grid in any combination as demand grows. When models with faster processors or greater densities become available, they too can be added.

One of the key features of ExaGrid’s system is that it works seamlessly with the backup systems people are used to, including Symantec Backup Exec and NetBackup, CA ARCserve, EMC NetWorker and CommVault Galaxy. The company says no changes are required to your current setup. “You would continue to do your backup jobs as you do them today,” Andrews says. “ExaGrid sits behind your current backup server as a storage repository.”

Multi-Site Protection

ExaGrid’s target user has at least a terabyte of data and up to 60TB or so to contend with, Andrews says, and last week the company announced enhancements designed for organizations with datacenters in multiple locations. “We’re now giving customers multi-site backup capabilities that will let them cross-protect up to nine locations,” Andrews says. “Let’s say you have major offices in San Francisco, Dallas, New York, Chicago and Boston. You can now cross-protect across all of them. You can back up data locally and then send a copy to any of those other sites for backup. You can point the data at any other location so that you can recover from a disaster. And because we’re only moving byte-level changes to off-site locations, you’re shipping only a fraction of the data across the WAN.” The company also added functions that allow for better monitoring of backup jobs.
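As a hypothetical illustration of such a cross-protection map: the site names below come from Andrews’ example, but the ring-style pairing is an assumption for illustration, not ExaGrid’s configuration format.

```python
# Hypothetical cross-protection map for the five-site scenario Andrews
# describes. The ring-style pairing is an illustrative assumption, not
# ExaGrid's configuration format.

sites = ["San Francisco", "Dallas", "New York", "Chicago", "Boston"]

# Each site backs up locally, then ships its byte-level deltas to the
# next site in the ring, so every location has an off-site copy.
replication = {src: sites[(i + 1) % len(sites)] for i, src in enumerate(sites)}

for src, dst in replication.items():
    print(f"{src} -> replicates deltas to -> {dst}")
```

Because only the byte-level changes cross the WAN, each arrow in that map carries a small fraction of the nightly backup volume.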

Report from the Field

A fairly typical ExaGrid user, MemorialCare Medical Centers runs six hospitals in Los Angeles and Orange County, Calif. With patient and business data to protect, backups were taking up to 18 hours a day, consuming much of the IT staff’s time and up to 300 tapes a week, says Jorge Cepeda, network engineer. With the ExaGrid system, backup time was reduced to 8 hours and the process was “painless,” Cepeda says. MemorialCare plans to install ExaGrid systems at all its hospitals in order to replicate data for disaster recovery.

In a report issued earlier this year, Enterprise Strategy Group said its ESG Lab tests “confirmed that ExaGrid backup-to-disk solutions combine the benefits of high-density SATA drives, post-process data de-duplication and scalable grid architecture to provide a cost-effective, energy-efficient alternative to tape.” According to the report’s lead author, ESG analyst Claude Bouffard, “Organizations struggling with the cost, complexity, and risk associated with tape backups would be wise to consider the bottom-line savings that can be achieved with ExaGrid: faster backups, quicker and more reliable restores, lower risk, lower expenses … and last, but not least, a greener solution with optimized power and cooling.”

The Taneja Group issued a statement as part of ExaGrid’s announcement last week in which senior analyst Jeff Boles said that in Taneja’s lab tests, “ExaGrid easily performed, scaled, and de-duplicated right out-of-the-box. ExaGrid’s scalability makes their performance claims even more compelling.” In a 2007 study, Taneja Group analysts recommended that organizations seeking a disk-based solution (“no longer optional,” they said) that delivers “ROI, reliability, flexibility, and demonstrable ease of use” should start with ExaGrid.

“I doubt we can ever make backup fun,” Andrews says, “but we will keep trying to make it better.”
