December 16, 2005
A new resident of the Math Sciences Building is supporting the sophisticated data-storage needs of researchers at Purdue University and helping to establish the institution among the nation's supercomputing elite.
"Robbie the Robot," named for the mechanical star of the 1950s sci-fi classic "Forbidden Planet," is a cutting-edge, automated storage and retrieval system that will enable vast amounts of data to be seamlessly archived and quickly located for researchers' use.
The $1 million robot system has the capacity to store up to 1 petabyte of data.
"To put this in context, one petabyte equals 1,000 terabytes," says Dwight McKay, director of systems engineering with Information Technology at Purdue (ITaP). "The U.S. Library of Congress contains approximately 10 terabytes of data, and our capacity is about 100 times that amount.
"That is substantial considering all the Internet content in existence is estimated to be 8 petabytes. This system brings Purdue up to the kind of data storage that other large, high-performance computing centers have."
This initiative is part of ITaP's ongoing efforts to upgrade high-performance computing capabilities.
"We've been actively expanding our resources to attract researchers to Purdue, and this robot system is one of the tools to help us become competitive at the national level of supercomputing," McKay says.
This is especially needed to support the new Cyber Center for supercomputing that was announced last summer as part of Discovery Park, the university's multidisciplinary research center.
"Researchers are coming to Purdue and bringing their very large data sets with them," says Mike Marsh, senior engineer in the Rosen Center for Advanced Computing. "With this system, we have the ability to capture that data in our library and have it automatically available to them, and that's a big advantage."
The robot also will enable more researchers to move toward mining data collected from multiple, sophisticated simulations. Some of the current research that will benefit includes climatology modeling and structural biology.
"These researchers have large computations and simulations, as well as large data sets," McKay says. "This is the tool they need to be effective in doing this kind of science."
McKay and his team monitor researchers' use of and needs for the system, which is in the testing phase and set to be operational in the spring. Through a user group, ITaP is able to gather feedback and adjust to the needs of researchers.
"We're a partnership with researchers," McKay says. "We are familiar with their labs so we see how we can help and what kinds of resources they need."
The tape robot device is part of a hierarchical storage-management system that consists of a server computer attached to the robotic tape mechanism, all within a 6-by-20-foot space. It uses extremely fast Fibre Channel connections. The software on the server presents users' data as online and available whenever they request it.
Behind the scenes and within about 10 seconds, the robotic arm - which resembles those used in automobile manufacturing - moves along a hallway of shelves storing data tapes to select and then load the requested data into the computer for researchers to access. Data that isn't being requested can be moved onto tapes for storage until it's needed. The entire process is lightning fast and carefully controlled by sophisticated sensors, Marsh says.
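The hierarchical storage idea described above - files always appear online while the robot recalls tape behind the scenes - can be sketched in a few lines of Python. Everything here (the class name, the 10-second recall figure taken from the article) is illustrative, not the actual library software:

```python
class HSMFile:
    """Illustrative stand-in for a file under hierarchical storage management.

    The file always *looks* available to the user; if its contents have been
    migrated to tape, a read transparently triggers a recall (the robot arm
    fetching and loading the tape) before the bytes come back.
    """
    RECALL_SECONDS = 10  # the article's rough figure for a robotic tape load

    def __init__(self, name, data):
        self.name = name
        self._data = data          # resident on disk
        self._on_tape = False
        self.last_read_latency = 0 # simulated wait, for illustration

    def migrate_to_tape(self):
        """Free disk space: keep only a stub; the real bytes live on tape."""
        self._on_tape = True

    def read(self):
        # To the user the file is simply "online"; reading migrated data
        # just takes ~10 seconds longer while the robot loads the tape.
        self.last_read_latency = self.RECALL_SECONDS if self._on_tape else 0
        self._on_tape = False      # data is staged back to disk after recall
        return self._data

f = HSMFile("simulation.dat", b"climate model output")
f.migrate_to_tape()
print(f.read())               # bytes return as if the file never left disk
print(f.last_read_latency)    # 10 -> the behind-the-scenes tape recall
```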
"Robbie" represents the third generation of such robots on campus.
"We've had similar, but much smaller, systems in the past," McKay says. "In this generation, we've added a significant piece of hardware with very large storage capability for archiving data and supporting data-intensive science."
The previous tape-storage robot - in use at Purdue since 1996 - could hold up to 60 terabytes of data on about 960 tapes with 15 tape drives that could each transfer 11 megabytes of data per second.
"Robbie" represents a quantum leap ahead, McKay says.
The new robot - an ADIC model using LTO-2 tape drives - has 5,400 tape slots and 36 drives that can each transfer 40 megabytes of data per second.
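The size of the generational jump can be quantified from the numbers in the article; the ratios below are simple division of the quoted figures, so treat them as rough comparisons:

```python
# 1996 system: ~60 TB on ~960 tapes, 15 drives at 11 MB/s each
old_capacity_tb = 60
old_throughput = 15 * 11     # 165 MB/s aggregate

# "Robbie" (ADIC library, LTO-2 drives): 1 PB, 5,400 slots, 36 drives at 40 MB/s
new_capacity_tb = 1_000
new_throughput = 36 * 40     # 1,440 MB/s aggregate

print(new_capacity_tb / old_capacity_tb)  # ~16.7x the capacity
print(new_throughput / old_throughput)    # ~8.7x the aggregate bandwidth
```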
This type of system can be found at the Central Intelligence Agency, the Social Security Administration, national research labs and very large insurance companies - but at few universities.
"These systems are expensive, physically large and require high-level staff to operate," Marsh says. "This robot is putting Purdue ahead of the curve."
The system can easily be doubled in size to two petabytes with additional tape drives. It also can accommodate 11 different models of tape drives from four different manufacturers, and many of the parts are engineered to be "hot-swappable" and redundant, which makes the system more flexible and able to stay online during maintenance.
"We can replace failed power supplies or tape drives while the library continues to run, which keeps the system available to researchers at all times," Marsh says.
The system operates 24 hours a day, providing continuous backup and automatic downloading to researchers. The old robot system will be online for about a year while its data is migrated to the new system.
Marsh says the new system also provides more efficiency in meeting government requirements for storage of sensitive data.
"It's critical that data be backed up in a separate location in case of natural disaster," he says. "With this system, it will be possible to locate another robot system elsewhere, like Indianapolis, and duplicate critical data in that remote location."
While "Robbie" is putting Purdue in the upper echelon of supercomputing, tape-storage needs will continue to become more sophisticated.
"One exabyte is 1,000 petabytes, and it's estimated that a 5-exabyte library would be able to store all the words ever uttered by every person who has ever lived since the origin of our species," Marsh says. "We should have libraries capable of storing an exabyte of data within the next several years."