January 24, 2013
CHICAGO, Ill., Jan. 24 – Globus Online announced today that it is the first service to pass the XSEDE Operations Acceptance Test. Globus Online has been accepted for production deployment, making it an official software service on XSEDE, the world's most advanced, powerful, and robust collection of integrated advanced digital resources. Globus Online is a file transfer and synchronization service geared specifically to the big-data needs of the research community, with web, command-line, and REST interfaces, and it is now a recommended and supported service for all XSEDE users.
This milestone builds on the long-standing relationship between XSEDE and Globus Online. XSEDE researchers have used the robust file transfer services of Globus Online since the XSEDE project’s inception in July 2011, and prior to that on TeraGrid, XSEDE’s predecessor.
“Core to our mission is lowering the technological barriers to accessing and using advanced computing resources,” said John Towns, principal investigator for XSEDE. “Globus Online supports our mission by providing a simple, powerful method for moving data to, from, and among XSEDE resources. We are pleased to integrate this powerful, yet easy-to-use file transfer service more formally into the XSEDE ecosystem.”
The combined power of XSEDE's resources and Globus Online's data transfer service can produce dramatic outcomes for researchers. Brian O'Shea, a computational astrophysicist at Michigan State University, analyzed large volumes of data on Kraken, a powerful supercomputer and key XSEDE resource, with the goal of understanding how galaxies in the early Universe grow and evolve in several statistically dissimilar environments. O'Shea used Globus Online to move more than 250 terabytes of data to Kraken in just 14 days, at an average rate of 1.6 gigabits per second (Gbps) with peaks as high as 2.5 Gbps. "We often encounter issues when moving large volumes of data, and it can slow down the pace of our work," said O'Shea. "Globus Online has proven to be a very useful service that greatly simplified file transfer tasks and reduced our IT burden, so we can spend more time on research."
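The quoted throughput figures are internally consistent: 250 terabytes moved in 14 days does work out to roughly 1.6 Gbps. A quick back-of-the-envelope check in Python (assuming decimal terabytes, the convention storage vendors use, which is the only assumption here):

```python
# Sanity-check the transfer rate quoted above:
# 250 TB moved in 14 days should average about 1.6 Gbps.

TB = 10**12                      # one terabyte in bytes (decimal)
bits_moved = 250 * TB * 8        # total volume transferred, in bits
seconds = 14 * 24 * 3600         # 14 days, in seconds

avg_gbps = bits_moved / seconds / 1e9
print(f"average rate: {avg_gbps:.2f} Gbps")  # average rate: 1.65 Gbps
```

The result, about 1.65 Gbps, matches the article's "average data rate of 1.6 gigabits per second," with the 2.5 Gbps figure representing peak rather than sustained throughput.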
Whether it's moving a small number of very large (even terabyte-sized) files or a very large number of small files, Globus Online is a software-as-a-service offering that makes it much simpler for researchers to transfer and synchronize large volumes of data between systems. Using their XSEDE User Portal credentials, researchers can access Globus Online's simple web interface to move data between any two XSEDE resources, because all XSEDE resources are already configured as Globus Online endpoints. For cases where a user needs to move files between an XSEDE resource and a personal machine, Globus Online makes it possible with just a few mouse clicks, and without the typical difficulties of installing and configuring specialized software.
“We are thrilled to have Globus Online recognized as an approved XSEDE service,” said Ian Foster, Director of the Computation Institute, a joint institute of the University of Chicago and Argonne National Laboratory, and the Globus Online project lead along with Steve Tuecke. “We see steady growth in Globus Online and have worked closely with the XSEDE team to provide a seamless user experience for researchers moving big data sets among XSEDE resources.”
About Globus Online
Globus Online is a reliable, high-performance service for secure data transfer, sharing, and synchronization. Designed specifically for researchers, Globus Online provides "fire-and-forget" file transfer capabilities that simplify the process of moving big data between any two resources, such as a supercomputing facility, cloud resource, campus cluster, lab server, desktop, or laptop. Globus Online is recommended by dozens of research institutions and high-performance computing facilities worldwide. Globus Online is an initiative by the Computation Institute at the University of Chicago and Argonne National Laboratory, and is supported in part by funding from the Department of Energy, the National Science Foundation, and the National Institutes of Health.
About XSEDE
The Extreme Science and Engineering Discovery Environment (XSEDE) gives researchers open access to the power of supercomputers, advanced computational tools, and digital resources and services directly from their desktops.
XSEDE links computers, data and people from around the world to establish a single, virtual system that scientists can interactively use to conduct research. Supported by the National Science Foundation (NSF), XSEDE aims to be the most advanced, powerful, and robust collection of integrated advanced digital resources and services in the world.
Source: Globus Online