Mining Data with ‘Robbie the Robot’

By Amy Page Christiansen

December 16, 2005

A new resident of the Math Sciences Building is supporting the sophisticated data-storage needs of researchers at Purdue University and helping to establish the institution among the nation's supercomputing elite.

“Robbie the Robot,” named for the mechanical star of the 1950s sci-fi classic “Forbidden Planet,” is a cutting-edge, automated storage and retrieval system that will enable vast amounts of data to be seamlessly archived and quickly located for researchers' use.

The $1 million robot system has the capacity to store up to 1 petabyte of data.

“To put this in context, one petabyte equals 1,000 terabytes,” says Dwight McKay, director of systems engineering with Information Technology at Purdue (ITaP). “The U.S. Library of Congress contains approximately 10 terabytes of data, and our capacity is about 100 times that amount.

“That is substantial considering all the Internet content in existence is estimated to be 8 petabytes. This system brings Purdue up to the kind of data storage that other large, high-performance computing centers have.”
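The comparison is straightforward unit arithmetic; the short Python sketch below, using the round-number estimates McKay cites, is purely illustrative.

    # The article's round-number estimates (decimal units: 1 PB = 1,000 TB).
    LIBRARY_OF_CONGRESS_TB = 10   # rough estimate cited above
    ROBOT_CAPACITY_PB = 1         # the new system's capacity
    INTERNET_CONTENT_PB = 8       # estimated total Internet content, circa 2005

    capacity_tb = ROBOT_CAPACITY_PB * 1_000
    print(f"{capacity_tb} TB ≈ {capacity_tb // LIBRARY_OF_CONGRESS_TB} Libraries of Congress")
    print(f"{ROBOT_CAPACITY_PB / INTERNET_CONTENT_PB:.0%} of the estimated Internet")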

This initiative is part of ITaP's ongoing efforts to upgrade high-performance computing capabilities.

“We've been actively expanding our resources to attract researchers to Purdue, and this robot system is one of the tools to help us become competitive at the national level of supercomputing,” McKay says.

This is especially needed to support the new Cyber Center for supercomputing that was announced last summer as part of Discovery Park, the university's multidisciplinary research center.

“Researchers are coming to Purdue and bringing their very large data sets with them,” says Mike Marsh, senior engineer in the Rosen Center for Advanced Computing. “With this system, we have the ability to capture that data in our library and have it automatically available to them, and that's a big advantage.”

The robot also will enable more researchers to move toward mining data collected from multiple, sophisticated simulations. Some of the current research that will benefit includes climatology modeling and structural biology.

“These researchers have large computations and simulations, as well as large data sets,” McKay says. “This is the tool they need to be effective in doing this kind of science.”

McKay and his team monitor researchers' use of and needs for the system, which is in the testing phase and set to be operational in the spring. Through a user group, ITaP is able to gather feedback and adjust to the needs of researchers.

“We're a partnership with researchers,” McKay says. “We are familiar with their labs so we see how we can help and what kinds of resources they need.”

The tape robot is part of a hierarchical storage-management system consisting of a server computer attached to the robotic tape mechanism, all within a 6-by-20-foot space and connected by high-speed Fibre Channel links. To users, the software on the server makes their data appear online and available whenever they request it.

Behind the scenes and within about 10 seconds, the robotic arm – which resembles those used in automobile manufacturing – moves along a hallway of shelves storing data tapes to select and then load the requested data into the computer for researchers to access. Data that isn't being requested can be moved onto tapes for storage until it's needed. The entire process is lightning fast and carefully controlled by sophisticated sensors, Marsh says.
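What McKay and Marsh describe is the classic hierarchical storage-management pattern: every file appears to be online, and a request for data that has been migrated to tape transparently triggers a robotic tape mount. A minimal, purely illustrative Python sketch of that flow follows; the class and method names are hypothetical, not the actual library software.

    import time

    class HSMLibrary:
        """Toy model of the hierarchical storage flow described above.
        Names are illustrative; this is not the actual Purdue software."""

        TAPE_FETCH_SECONDS = 10  # the article's rough figure for a robotic tape load

        def __init__(self):
            self.disk_cache = {}  # data currently staged on the server's disks
            self.tape_shelf = {}  # data archived on cartridges in the library

        def read(self, name):
            """To the user every file looks online; a cache miss just takes longer."""
            if name not in self.disk_cache:
                time.sleep(self.TAPE_FETCH_SECONDS)              # robot mounts the tape
                self.disk_cache[name] = self.tape_shelf.pop(name)
            return self.disk_cache[name]

        def migrate(self, name):
            """Data that isn't being requested is moved back out to tape."""
            self.tape_shelf[name] = self.disk_cache.pop(name)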

“Robbie” represents the third generation of such robots on campus.

“We've had similar, but much smaller, systems in the past,” McKay says. “In this generation, we've added a significant piece of hardware with very large storage capability for archiving data and supporting data-intensive science.”

The previous tape-storage robot – in use at Purdue since 1996 – could hold up to 60 terabytes of data on about 960 tapes with 15 tape drives that could each transfer 11 megabytes of data per second.

“Robbie” represents a quantum leap ahead, McKay says.

The new robot – an ADIC model using LTO-2 tape drives – has 5,400 tape slots and 36 drives that can each transfer 40 megabytes of data per second.
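Taken at face value, those figures imply roughly a ninefold jump in aggregate drive bandwidth and more than a sixteenfold jump in capacity. A back-of-the-envelope Python comparison, assuming every drive streams at its full rated speed in parallel (which real workloads rarely achieve):

    # Back-of-the-envelope comparison of the two generations, using the
    # figures above and assuming every drive streams at its rated speed.
    old = {"capacity_tb": 60,   "tapes": 960,  "drives": 15, "mb_per_s": 11}
    new = {"capacity_tb": 1000, "tapes": 5400, "drives": 36, "mb_per_s": 40}

    for label, spec in (("1996 robot", old), ("Robbie", new)):
        aggregate_mb_s = spec["drives"] * spec["mb_per_s"]
        per_tape_gb = spec["capacity_tb"] * 1000 / spec["tapes"]
        print(f"{label}: {aggregate_mb_s} MB/s aggregate, ~{per_tape_gb:.0f} GB per tape")

The resulting per-cartridge figure of roughly 185 GB is in line with LTO-2's native cartridge capacity of 200 GB.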

This type of system can be found at the Central Intelligence Agency, the Social Security Administration, national research labs and very large insurance companies, but at few universities.

“These systems are expensive, physically large and require high-level staff to operate,” Marsh says. “This robot is putting Purdue ahead of the curve.”

The system can easily be doubled in size to two petabytes with additional tape drives. It also can accommodate 11 different models of tape drives from four different manufacturers, and many of the parts are engineered to be “hot-swappable” and redundant, which makes the system more flexible and able to stay online during maintenance.

“We can replace failed power supplies or tape drives while the library continues to run, which keeps the system available to researchers at all times,” Marsh says.

The system operates 24 hours a day, providing continuous backup and automatic data retrieval for researchers. The old robot system will remain online for about a year while its data is migrated to the new system.

Marsh says the new system also provides more efficiency in meeting government requirements for storage of sensitive data.

“It's critical that data be backed up in a separate location in case of natural disaster,” he says. “With this system, it will be possible to locate another robot system elsewhere, like Indianapolis, and duplicate critical data in that remote location.”

While “Robbie” is putting Purdue in the upper echelon of supercomputing, tape-storage needs will continue to become more sophisticated.

“One exabyte is 1,000 petabytes, and it's estimated that a 5-exabyte library would be able to store all the words ever uttered by every person who has ever lived since the origin of our species,” Marsh says. “We should have libraries capable of storing an exabyte of data within the next several years.”
