September 03, 2009
BATON ROUGE, La., Sept. 3 -- LSU Professor Tevfik Kosar received a half-million-dollar grant from the National Science Foundation to support his work on the Stork Data Scheduler, an innovative computing tool that helps researchers access and transfer large data sets easily and efficiently.
The grant award provides Kosar, a professor with the LSU Department of Computer Science who holds a joint appointment with the LSU Center for Computation & Technology, or CCT, with funding for three years to further develop and enhance the Stork Data Scheduler.
As computational science applications expand and become increasingly complex, researchers using these applications are generating larger and larger amounts of data, sometimes reaching hundreds of terabytes and even petabytes. Sharing, disseminating, and analyzing these large data sets pose a growing challenge for researchers, who need to collaborate but are unable to move so much information quickly or effectively.
Even though many researchers now have access to regional high-speed, fiber optic networks such as the Louisiana Optical Network Initiative, or LONI, many users cannot obtain even a fraction of the theoretical speeds these networks promise because of data overload, which slows transmission and causes a bottleneck in computational performance and reliability.
Kosar's project, funded through the National Science Foundation's Strategic Technologies for Cyberinfrastructure program, aims to ease these data bottlenecks and thereby improve the overall performance of high-performance computing systems. Stork, so named because it delivers data, is a batch scheduler that makes it easier for researchers to share, store and deliver data across these systems.
Using Stork, researchers can transfer very large data sets with only a single command, making it one of the most powerful data transfer tools available. Stork is compatible with advanced high-performance computing toolkits, and researchers can use the software to access the power of these large systems and use them more effectively.
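For readers curious what "a single command" looks like in practice, the sketch below shows one way a transfer job might be handed to Stork. It assumes the ClassAd-style job description and the stork_submit command described in the Stork project's publications; the host names, paths, and protocols shown are placeholders for illustration, not part of any actual deployment.

    import subprocess
    import tempfile

    # A minimal Stork job description in ClassAd-style syntax (the
    # source and destination URLs here are placeholders). Once queued,
    # the scheduler handles monitoring, retries, and completion --
    # the user issues only one submit command.
    job = """
    [
        dap_type = "transfer";
        src_url  = "gsiftp://storage.example.edu/data/simulation.tar";
        dest_url = "file:///scratch/user/simulation.tar";
    ]
    """

    # Write the description to a submit file.
    with tempfile.NamedTemporaryFile("w", suffix=".stork",
                                     delete=False) as f:
        f.write(job)
        submit_file = f.name

    # Hand the job to the Stork server; scheduling and recovery are
    # then Stork's responsibility rather than the researcher's.
    subprocess.run(["stork_submit", submit_file], check=True)

The point of the design, as the article notes, is that the researcher describes what should move where, and the scheduler owns everything that can go wrong in between.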
"The Stork data scheduler makes a distinctive contribution to the computational research community because it focuses on planning, scheduling, monitoring and management of data," Kosar said. "Unlike existing approaches, Stork treats data resources and their related tasks as primary components of computational resources, not simply as side effects. This will lead to quicker and more effective collaboration among researchers."
Researchers consider the Stork Data Scheduler a highly transformative project because of its potential to dramatically change how scientists perform their research and to rapidly facilitate sharing of experience, raw data, and results. Future applications could rely on Stork to manage storage and data movement reliably and transparently across many systems, eliminating the unnecessary failure of distributed tasks.
The Stork team made the first version (Stork 1.0) available for download through the Stork project Web page, www.storkproject.org, in December 2008. Stork is open source, and users can download it for free.
Data storage and management is Kosar's research specialty at the University. In 2006, he received a $1 million grant from NSF to create advanced data archival, processing and visualization capabilities across the state through the PetaShare project (www.petashare.org).
Kosar received a National Science Foundation CAREER Award in January 2009 for his research addressing the problems of distributed data storage and transfer. Through his work on the CAREER project, titled "Data-aware Distributed Computing for Enabling Large-scale Collaborative Science," Kosar is developing the theory and foundations of new computing systems that manage data more effectively with automated processes. These processes enable scientists to spend more time focusing on their research questions and less time dealing with data. This project is funded for five years at $400,000.
Kosar's recent Stork grant expands on the models and algorithms created through his CAREER grant work, implementing them in scheduling software that will be available for production use and distribution.
"Dr. Kosar consistently demonstrates creative and innovative work, and we are excited he has received funding to continue work on a project that stands to benefit other University researchers and the broader scientific community," said CCT Interim Director Stephen David Beck.
Source: LSU Center for Computation & Technology