November 09, 2010
HAVANT, England, Nov. 9 -- Xyratex Ltd., a leading global provider of enterprise-class data storage subsystems and storage process technology to OEMs, today announced that Peter Braam and Peter Bojanic have joined Xyratex to help lead the company's Lustre initiative.
Xyratex has reunited the core architectural and development team of Lustre through the acquisition of ClusterStor and the hiring of Peter Bojanic, helping ensure that Lustre continues to be a foundation of high performance computing (HPC) data storage. Peter Braam, working closely with the United States Department of Energy's Lustre community, invented the Lustre file system in the early 2000s at his original company, Cluster File Systems (CFS), which was acquired by Sun Microsystems in 2007. Most recently Braam was the CEO of ClusterStor, a startup he formed to further develop and support file system technology. Xyratex acquired the assets of ClusterStor earlier this year and has continued to add world-class development talent in order to further enhance the capabilities of Lustre and support the Lustre community. Peter Bojanic joined Xyratex to lead the Lustre development and support team; he most recently led the Lustre development and support group at Oracle.
"Lustre is recognized as a leading high performance clustered file system in high performance computing with over 60 percent share of the Top 100 systems in the TOP500," said Earl Joseph, research analyst at IDC. "Peter Braam and Peter Bojanic are recognized as key leaders of the Lustre community and by reuniting them, there's no question that this is a very positive move for the broader HPC community and that it will help to ensure that Lustre will continue to be a key element of HPC data storage environments."
"I'm excited about joining Xyratex because it will enable me to continue the vision of Lustre," said Peter Braam, senior vice president of software at Xyratex. "Xyratex is committed to working with all the stakeholders in the Lustre open source community in order to continue the development and support of Lustre. With both the financial and technical resources available through Xyratex, we'll be able to ensure that Lustre continues to evolve in high performance computing."
"Peter Braam and his team have provided us with invaluable guidance and support," said Paul Calleja, director of high performance computing at Cambridge University. "Lustre is the cornerstone of our High Performance Centre here at Cambridge and we rely on it to support our demanding I/O requirements. We have made a significant investment in Lustre and are delighted to see companies such as Xyratex continue with both support and roadmap enhancements."
"Xyratex is an OEM focused company with the clear goal of enabling our OEM partners to leverage Lustre in their development of next generation high performance computing technologies," said Steve Barber, CEO of Xyratex. "We're very excited to have the ClusterStor team join Xyratex and are looking forward to working with our OEM partners and the broader Lustre community on enhancing the Lustre file system."
Xyratex (Nasdaq: XRTX) is a leading provider of enterprise-class data storage subsystems and storage process technology. The company designs and manufactures enabling technology that provides OEM and disk drive manufacturers with data storage products to support high-performance storage and data communication networks. Xyratex has over 25 years of experience in research and development relating to disk drives, storage systems, and high-speed communication protocols. Founded in 1994 in a management buy-out from IBM, and with headquarters in the UK, Xyratex has an established global base with R&D and operational facilities in Europe, the United States, and Southeast Asia.
Source: Xyratex Ltd.
In quieter times, sounding the bell for funding big science with big systems tends to resonate further than when ears are already burning with sour economic and national security news. For exascale's future, however, the time could be ripe to instill a sense of urgency....
In a recent solicitation, the NSF laid out its needs for furthering its scientific and engineering infrastructure with new tools that go beyond raw performance. Having already delivered systems like Stampede and Blue Waters, the agency is turning an eye to solving data-intensive challenges. We spoke with the agency's Irene Qualters and Barry Schneider about...
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls ‘Climate in a Box,’ a system it describes as a desktop supercomputer.
May 22, 2013
At some point in the not-too-distant future, building powerful, miniature computing systems will be considered a hobby for high schoolers, just as robotics or even Lego-building are today. That could be made possible through recent advancements made with the Raspberry Pi computers.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud, benchmarking a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 15, 2013
Supercomputers at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC) have tackled important computational problems such as the collapse of atomic states and the optimization of chemical catalysts, and are now modeling popping bubbles.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/15/2013 | Bull | “50% of HPC users say their largest jobs scale to 120 cores or less.” How about yours? Are your codes ready to take advantage of today’s and tomorrow’s ultra-parallel HPC systems? Download this White Paper by Analysts Intersect360 Research to see what Bull and Intel’s Center for Excellence in Parallel Programming can do for your codes.
In this demonstration of the SGI DMF ZeroWatt disk solution, Dr. Eng Lim Goh, SGI CTO, discusses how SGI DMF software reduces costs and power consumption in an exascale (Big Data) storage datacenter.
The Cray CS300-AC cluster supercomputer offers an energy-efficient, air-cooled design based on modular, industry-standard platforms featuring the latest processor and network technologies, and supports a wide range of datacenter cooling requirements.