November 04, 2005
As part of its resource contributions to the NSF-funded TeraGrid, Indiana University is now making tape-based storage available to users of the TeraGrid via Indiana University's Massive Data Storage System (MDSS), which uses the High Performance Storage System (HPSS) software. Researchers with TeraGrid allocations may store up to one terabyte of data within IU's HPSS system, and data will remain available for one year past the end of the user's allocation.
HPSS is software that manages large volumes of data on disk and in robotic tape libraries, aggregating the capacities of many physical storage devices into a single, virtually infinite file system. Development of HPSS began in 1992 as a collaboration between IBM Global Services and five Department of Energy laboratories to address the growing challenges of capacity, I/O, and functionality in massive storage systems. The HPSS architecture enables superb scalability of transfer rates and data capacity, meeting the requirements of national and international academic institutions, government agencies, and other organizations that need to store in a single namespace the largest sets of data currently being collected.
The HPSS system at IU, with a total capacity of more than 2.2 petabytes, is the first and only HPSS installation to implement distributed data movers. Indiana University's installation of HPSS is unusual in that IU maintains geographically separate data silos in Bloomington and Indianapolis, IN. Users who store data in IU's HPSS system have the option of keeping two copies -- one in Bloomington, one in Indianapolis -- ensuring that data are stored reliably even in the event of the destruction of one of the machine rooms.
Access to store and retrieve files from IU's HPSS system is now available from every TeraGrid site via the Hierarchical Storage Interface (HSI). IU's initial usage policy for this resource will be to provide, by default, up to one terabyte of storage to TeraGrid users, representing 500 GB of data safely stored in multiple locations. The availability of IU's HPSS system will be of particular value to the "TeraGrid Wide" strategy, the TeraGrid project's goal of making the TeraGrid widely valuable to a large portion of the nation's research community. Advanced users are expected to find great value in the massive storage available in IU's HPSS system, but perhaps more importantly, the availability of one terabyte of storage will benefit many researchers throughout the nation who do not have access at their own home institutions to a sophisticated archival data storage system.
It is important to note that researchers do not have to compute on the TeraGrid to take advantage of the data storage capabilities provided by Indiana University via the TeraGrid. Researchers may request an allocation through the Developmental Allocation Committee, receive a small allocation of computing time, and simply use their TeraGrid credentials for access to the IU HPSS system.
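For readers unfamiliar with HSI, the workflow described above can be sketched as a short shell session: authenticate with TeraGrid (GSI) credentials, then use HSI's put/get commands to move files in and out of the archive. The file names and HPSS paths below are hypothetical examples, and the script defaults to a dry run that only prints the commands, since real execution requires a TeraGrid allocation and an HPSS account.

```shell
#!/bin/sh
# Sketch of storing and retrieving a file in IU's HPSS system via HSI.
# All paths and file names are hypothetical; the general pattern
# (grid proxy, then hsi put/get with "local : remote" syntax) is standard.

# DRY_RUN=1 (the default) prints each command instead of executing it,
# so the sketch can be read without credentials or an HPSS account.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 1. Obtain a grid proxy from your TeraGrid credentials.
run grid-proxy-init

# 2. Archive a local file; HSI's put uses "localname : hpssname".
run hsi put results.dat : /home/username/results.dat

# 3. List the archive directory, and retrieve the file later.
run hsi ls -l /home/username
run hsi get results_copy.dat : /home/username/results.dat
```

Because HSI preserves a familiar put/get/ls command vocabulary, researchers can treat the tape archive much like a remote file system without needing to know where the data physically reside.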
For information on how to get an account on the TeraGrid, go to http://kb.iu.edu/data/anql.html. For specific, detailed instructions on how to access Indiana University's HPSS system via the TeraGrid, see http://kb.iu.edu/data/arux.html. For additional information about HPSS, visit the home page of the HPSS Collaboration at http://www.hpss-collaboration.org. For information about the TeraGrid, see http://www.teragrid.org. For more about high performance computing at IU, visit http://uits.iu.edu/scripts/ose.cgi?amee.help. For more information about IU's contributions to the TeraGrid, see http://iu.teragrid.org/index.html.
About the TeraGrid: IU is one of eight resource partners contributing to the TeraGrid, along with the National Center for Supercomputing Applications, Oak Ridge National Laboratory, Pittsburgh Supercomputing Center, Purdue University, San Diego Supercomputer Center, Texas Advanced Computing Center and Argonne National Laboratory. The TeraGrid combines computational, storage, network and visualization resources from these partner sites to create a tremendous integrated resource to support scientific research.
The TeraGrid was launched by the National Science Foundation in 2001, and Indiana University was added as a resource partner in 2003. Through its participation in the TeraGrid, IU plans to continue what it has long done for local researchers: focus on data-centric science and support for the life sciences.
Scott McCaulay is Indiana University's TeraGrid Site Lead. Thomas Hacker is associate director for research and academic computing at Indiana University. Andrew Arenson is manager of the Distributed Storage Services Group at Indiana University.