December 16, 2005
A new resident of the Math Sciences Building is supporting the sophisticated data-storage needs of researchers at Purdue University and helping to establish the institution among the nation's supercomputing elite.
"Robbie the Robot," named for the mechanical star of the 1950s sci-fi classic "Forbidden Planet," is a cutting-edge, automated storage and retrieval system that will enable vast amounts of data to be seamlessly archived and quickly located for researchers' use.
The $1 million robot system has the capacity to store up to 1 petabyte of data.
"To put this in context, one petabyte equals 1,000 terabytes," says Dwight McKay, director of systems engineering with Information Technology at Purdue (ITaP). "The U.S. Library of Congress contains approximately 10 terabytes of data, and our capacity is about 100 times that amount.
"That is substantial considering all the Internet content in existence is estimated to be 8 petabytes. This system brings Purdue up to the kind of data storage that other large, high-performance computing centers have."
This initiative is part of ITaP's ongoing efforts to upgrade high-performance computing capabilities.
"We've been actively expanding our resources to attract researchers to Purdue, and this robot system is one of the tools to help us become competitive at the national level of supercomputing," McKay says.
This is especially needed to support the new Cyber Center for supercomputing that was announced last summer as part of Discovery Park, the university's multidisciplinary research center.
"Researchers are coming to Purdue and bringing their very large data sets with them," says Mike Marsh, senior engineer in the Rosen Center for Advanced Computing. "With this system, we have the ability to capture that data in our library and have it automatically available to them, and that's a big advantage."
The robot also will enable more researchers to move toward mining data collected from multiple, sophisticated simulations. Some of the current research that will benefit includes climatology modeling and structural biology.
"These researchers have large computations and simulations, as well as large data sets," McKay says. "This is the tool they need to be effective in doing this kind of science."
McKay and his team monitor researchers' use of and needs for the system, which is in the testing phase and set to be operational in the spring. Through a user group, ITaP is able to gather feedback and adjust to the needs of researchers.
"We're a partnership with researchers," McKay says. "We are familiar with their labs so we see how we can help and what kinds of resources they need."
The tape robot is part of a hierarchical storage-management system consisting of a server computer attached to the robotic tape mechanism, all within a 6-by-20-foot space. The components are linked by extremely fast Fibre Channel connections. Software on the server presents data to users as online and available whenever they request it.
Behind the scenes and within about 10 seconds, the robotic arm - which resembles those used in automobile manufacturing - moves along a hallway of shelves storing data tapes to select and then load the requested data into the computer for researchers to access. Data that isn't being requested can be moved onto tapes for storage until it's needed. The entire process is lightning fast and carefully controlled by sophisticated sensors, Marsh says.
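The transparency Marsh describes is the core idea of hierarchical storage management: the file namespace stays intact while the actual bytes shuttle between disk and tape. A minimal sketch of that behavior, using hypothetical class and file names (not Purdue's actual software):

```python
class HSM:
    """Toy hierarchical storage manager: files look online to users,
    but cold data lives on tape and is recalled on demand."""

    def __init__(self):
        self.disk = {}  # "online" tier: name -> data
        self.tape = {}  # "nearline" tier: name -> data

    def write(self, name, data):
        self.disk[name] = data  # new data lands on disk first

    def read(self, name):
        if name not in self.disk:
            # Behind the scenes: the robot mounts the tape and stages
            # the file back to disk (about 10 seconds on the real system).
            self.disk[name] = self.tape.pop(name)
        return self.disk[name]

    def migrate(self, name):
        # Data that isn't being requested is moved to tape; the
        # namespace entry survives, so to the user the file still
        # appears available.
        self.tape[name] = self.disk.pop(name)


hsm = HSM()
hsm.write("climate_run_42.nc", b"...simulation output...")
hsm.migrate("climate_run_42.nc")      # staged out to tape
data = hsm.read("climate_run_42.nc")  # transparently recalled to disk
```

The user-facing `read` call is the same whether the file is on disk or on tape; only the latency differs, which is why the robot's roughly 10-second fetch time matters.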
"Robbie" represents the third generation of such robots on campus.
"We've had similar, but much smaller, systems in the past," McKay says. "In this generation, we've added a significant piece of hardware with very large storage capability for archiving data and supporting data-intensive science."
The previous tape-storage robot - in use at Purdue since 1996 - could hold up to 60 terabytes of data on about 960 tapes with 15 tape drives that could each transfer 11 megabytes of data per second.
"Robbie" represents a quantum leap ahead, McKay says.
The new robot - an ADIC model using LTO-2 tape drives - has 5,400 tape slots and 36 drives that can each transfer 40 megabytes of data per second.
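The scale of that leap is easy to check from the article's own figures. A quick back-of-the-envelope calculation, assuming decimal units for the 1-petabyte total:

```python
# Published specs for the two generations of tape library.
old_bw = 15 * 11   # 1996 system: 15 drives x 11 MB/s = 165 MB/s aggregate
new_bw = 36 * 40   # new ADIC system: 36 drives x 40 MB/s = 1,440 MB/s aggregate

speedup = new_bw / old_bw   # roughly 8.7x the aggregate streaming throughput

# Capacity per slot implied by 1 PB spread across 5,400 tape slots:
per_tape_gb = 1_000_000 / 5400   # ~185 GB per tape
```

The ~185 GB-per-tape figure is consistent with the 200 GB native capacity of the LTO-2 cartridges the article names, so the quoted 1-petabyte total lines up with the hardware.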
This type of system can be found at the Central Intelligence Agency, the Social Security Administration, national research labs and very large insurance companies - but at few universities.
"These systems are expensive, physically large and require high-level staff to operate," Marsh says. "This robot is putting Purdue ahead of the curve."
The system can easily be doubled in size to two petabytes with additional tape drives. It also can accommodate 11 different models of tape drives from four different manufacturers, and many of the parts are engineered to be "hot-swappable" and redundant, which makes the system more flexible and able to stay online during maintenance.
"We can replace failed power supplies or tape drives while the library continues to run, which keeps the system available to researchers at all times," Marsh says.
The system operates 24 hours a day, providing continuous backup and automatic downloading to researchers. The old robot system will be online for about a year while its data is migrated to the new system.
Marsh says the new system also provides more efficiency in meeting government requirements for storage of sensitive data.
"It's critical that data be backed up in a separate location in case of natural disaster," he says. "With this system, it will be possible to locate another robot system elsewhere, like Indianapolis, and duplicate critical data in that remote location."
While "Robbie" is putting Purdue in the upper echelon of supercomputing, tape-storage needs will continue to become more sophisticated.
"One exabyte is 1,000 petabytes, and it's estimated that a 5-exabyte library would be able to store all the words ever uttered by every person who has ever lived since the origin of our species," Marsh says. "We should have libraries capable of storing an exabyte of data within the next several years."