October 19, 2010
Non-profit organization launches with committed contributions to support practical development of HPC storage software technology
Oct. 19 -- Cray Inc., DataDirect Networks (DDN) Inc., Lawrence Livermore National Laboratory (LLNL) and Oak Ridge National Laboratory (ORNL) today announced the incorporation of Open Scalable File Systems, Inc. (OpenSFS) -- a California nonprofit mutual benefit corporation. OpenSFS will support the requirements of the data-intensive computing community by fostering the practical development of high performance computing (HPC) storage software technology.
OpenSFS is a technical organization focused on high-end, open-source file system technologies. Its goals are to provide a forum for collaboration among entities deploying file systems on leading-edge HPC systems, to communicate future requirements to Lustre file system developers, and to support a release of the Lustre file system designed to meet those requirements. The group's initial focus is the Lustre parallel file system, which supports many of the requirements of leadership-class HPC simulation environments, has a diverse development community, and is open-source software.
One of the great challenges HPC faces is the storage and management of the enormous quantities of data produced by ever more powerful HPC systems. By bringing together leaders from industry and the national labs, OpenSFS aims to improve current and future HPC Lustre deployments and accelerate the development of Lustre file system technologies that will advance scientific research and improve economic competitiveness.
Drawing upon lessons learned from successful open-source collaboration models such as OpenFabrics and Open MPI, OpenSFS will welcome broad participation from the HPC and open-source storage communities through an open, worldwide participation model. OpenSFS will be headed by Norman Morse as CEO. Mr. Morse brings extensive experience in the development and deployment of storage systems for modern HPC environments, including seven years as the data center manager responsible for all scientific computing and communications at Los Alamos National Laboratory, HPC business development in private industry, and three years as a staffer for the U.S. House Armed Services Committee overseeing the IT budget of the Department of Defense. Morse also has a long record of business development for Silicon Valley start-up companies.
"I am excited at the prospect of providing the HPC file system user community with a focal point that can centralize requirements and fund developers to satisfy those requirements," said Morse, CEO of OpenSFS. "Lustre was invented specifically to provide the high performance storage system demanded by HPC centers addressing grand challenge computing applications. Providing storage system support for the current HPC environment and extending that capability to the next generation HPC systems is an exciting challenge that the OpenSFS team is well positioned to meet."
"Lustre plays an important role for our customers who need high performance I/O capabilities to support some of the most scalable, highest performing production supercomputers on the planet," said Peter Ungaro, president and CEO of Cray. "Cray is committed to the continuing success of Lustre for Cray users and the broader Lustre community. We are excited to play a founding role with other leaders in the HPC community in launching OpenSFS, helping to ensure that Lustre will continue to evolve to meet these needs."
"For over 10 years, DataDirect Networks has been central to the world's most challenging HPC deployments. The establishment of OpenSFS sets file system development and collaboration on the right path, and ensures the long term liberty and viability of technology which is critical to advancing the state of applied research and data-intensive computing," said Alex Bouzari, CEO of DataDirect Networks. "We are extremely proud to serve as founding members of OpenSFS and look forward to expanding the open organization with our worldwide customer and partner network."
"Our NNSA national security mission requires high-end HPC resources. We deploy over 22 systems in two distinct world-class simulation environments for multiple programs at LLNL. The key integrating element in both of these simulation environments is the Lustre parallel file system. We welcome the formation of OpenSFS in order to provide the HPC Lustre community with a mechanism to share development and support resources," said Dr. Mark Seager, Livermore Computing Assistant Department Head for Advanced Technology.
"OpenSFS presents a unique opportunity for the broader Lustre community to actively contribute to the continued success of the world's most scalable open-source parallel file system," said Galen Shipman, group leader of technology integration at Oak Ridge National Laboratory. "The Oak Ridge Leadership Computing Facility has consistently achieved leading-edge performance and scalability with the Lustre file system through a highly collaborative development model. We are excited to partner with other leaders in HPC to bring such advancements to the broader community through OpenSFS."
OpenSFS will collaborate with two centers of excellence already established at ORNL and LLNL, and these centers will work with each other and with other collaborators of OpenSFS on day-to-day activities. OpenSFS will provide a collaborative environment in which requirements can be aggregated, distilled, and prioritized, and development activities can be coordinated and focused to meet the needs of the broader Lustre community. In addition to these activities, OpenSFS plans to hold an annual scalable file system workshop, as well as provide a variety of services (education and community outreach, testing, documentation and project management) to the community.
All interested parties are invited and encouraged to join OpenSFS at one of the three levels available. Visit www.OpenSFS.org for further details about participation and the OpenSFS organization.
Source: Cray Inc.; DataDirect Networks Inc.; Lawrence Livermore National Laboratory; Oak Ridge National Laboratory