December 21, 2007
CHICAGO, Dec. 18 -- Grid.org, the online community for open-source cluster and grid software, grew to 481 members and recorded more than 900 downloads of the free open-source Cluster Express beta software in its first month of availability.
Grid.org was launched Nov. 12, 2007, to provide a single aggregation point for information and interaction by the community of users, developers and administrators interested in a complete grid and cluster software stack. The site's primary open-source project, Cluster Express, provides comprehensive cluster scheduling and management by integrating proven, best-of-breed, open-source components into a seamless package that is easy to install and use.
"Response to the initial Grid.org launch and to our call for participation in the Cluster Express beta program has been gratifying," said Steve Tuecke, co-founder and chief technology officer at Univa UD, the Grid.org site sponsor. "Obviously we tapped into pent-up demand for a complete, integrated, open-source approach to cluster and grid computing."
Univa UD announced the initial beta program for Cluster Express last month as a way to let users positively impact and shape development of the software, expected to be generally available in early 2008. Today, the company announced availability of the second beta version of Cluster Express on Grid.org.
"We expect the release of the new beta version to drive more participation in the community, as more and more people begin to install and use the technology. With the excellent input we're getting from users, administrators and developers, there is no doubt we will be able to integrate exactly the components and features this market wants in subsequent releases," Tuecke said.
Grid.org is expanding to meet community requirements based on input from site visitors. Recently, the site added a Wiki that allows shared authoring of open-source grid and cluster content by the Grid.org community. Grid.org also plans to support code-sharing, allowing Cluster Express developers to contribute to the software and users to easily share enhancements and applications. This capability, along with access to the Cluster Express source repository and versioning control system, will be available to members in the first quarter of 2008. Other planned enhancements include an interactive map of cluster implementations worldwide, to visually display and provide metrics on the landscape of cluster users at a global level.
Grid.org is an online community for open-source cluster and grid software users, administrators and developers. The site's mission has evolved to focus on providing a single location where open-source cluster and grid information can be aggregated, so that people with similar interests can easily exchange information, experiences and ideas related to the complete open-source cluster software stack. Established in 2001, Grid.org operated as a public-interest Internet research grid for over six years and has now broadened its reach to encourage use of open-source technologies for grid computing at large.
About Univa UD
Univa UD is the leading provider of open-source products for grid and cluster computing environments. The company's industrial-strength offerings range from departmental and HPC cluster management to enterprise-wide grids, and represent the proven and cost-effective alternative to traditional proprietary products that customers have been waiting for. Based on a combination of open-source and proprietary components, Univa UD offerings include a downloadable open-source cluster management product, a proprietary cluster product with rich functionality, and a comprehensive enterprise grid product based on award-winning technology. All Univa UD products are run by Fortune 1000 companies in large-scale, production environments. Univa UD is headquartered in Lisle, Ill., with offices in Austin, Texas. For more information, contact Univa UD at 1-800-370-5320 or visit www.univaud.com.
Source: Univa UD
In quieter times, sounding the bell of funding big science with big systems tends to resonate further than when ears are already burning with sour economic and national security news. For exascale's future, however, the time could be ripe to instill some sense of urgency...
In a recent solicitation, the NSF laid out its needs for furthering the nation's scientific and engineering infrastructure with new tools that go beyond top performance. Having already delivered systems like Stampede and Blue Waters, the agency is turning an eye to solving data-intensive challenges. We spoke with the agency's Irene Qualters and Barry Schneider about...
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced 'Climate in a Box,' a system it describes as a desktop supercomputer.
May 22, 2013
At some point in the not-too-distant future, building powerful, miniature computing systems will be considered a hobby for high schoolers, just as robotics or even Lego-building are today. That could be made possible through recent advancements made with the Raspberry Pi computers.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined those latency issues by running a common CFD code on Amazon EC2 HPC instance types, including both CPU and GPU cores.
May 15, 2013
Supercomputers at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC) have worked on important computational problems such as collapse of the atomic state, the optimization of chemical catalysts, and now modeling popping bubbles.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/15/2013 | Bull | “50% of HPC users say their largest jobs scale to 120 cores or less.” How about yours? Are your codes ready to take advantage of today’s and tomorrow’s ultra-parallel HPC systems? Download this White Paper by Analysts Intersect360 Research to see what Bull and Intel’s Center for Excellence in Parallel Programming can do for your codes.
In this demonstration of the SGI DMF ZeroWatt disk solution, Dr. Eng Lim Goh, SGI CTO, discusses how SGI DMF software reduces costs and power consumption in an exascale (Big Data) storage datacenter.
The Cray CS300-AC cluster supercomputer offers an energy-efficient, air-cooled design based on modular, industry-standard platforms, featuring the latest processor and network technologies and supporting a wide range of datacenter cooling requirements.