November 17, 2010
Downloadable kit simplifies the deployment of Lustre storage clients across compute nodes
NEW ORLEANS, Nov. 15 -- Terascala, Inc., the leading provider of high-throughput, high-capacity parallel storage appliances, today announced the availability of a new storage client for Platform Computing's Platform Cluster Manager. Organizations can now use Platform Cluster Manager to speed and simplify the deployment of Lustre storage clients across multiple compute nodes.
Terascala customers seeking to leverage Platform's HPC cluster management software can obtain the storage client by downloading a kit from Terascala's website.
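The release does not detail the kit's internals, but it helps to see what "deploying a Lustre storage client" typically involves on each compute node: installing the client software, loading the Lustre kernel module, and mounting the filesystem from its management server (MGS). The Python sketch below is a generic illustration of those per-node steps, not Terascala's kit; the MGS node name (mgs01), filesystem name (tfs), and mount point are hypothetical placeholders.

```python
#!/usr/bin/env python
"""Illustrative sketch of the per-node steps a Lustre client kit automates.

Hypothetical values: the MGS NID (mgs01@tcp0), filesystem name (tfs), and
mount point are placeholders, not Terascala or Platform defaults. Assumes
the Lustre client packages are already installed and the script runs as root.
"""
import subprocess

MGS_NID = "mgs01@tcp0"       # Lustre management server NID (hypothetical)
FSNAME = "tfs"               # Lustre filesystem name (hypothetical)
MOUNT_POINT = "/mnt/lustre"  # client-side mount point (hypothetical)


def run(cmd):
    """Echo and run a command, raising if it exits nonzero."""
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)


def mount_lustre_client():
    # Load the Lustre client kernel module.
    run(["modprobe", "lustre"])
    # Ensure the mount point exists.
    run(["mkdir", "-p", MOUNT_POINT])
    # Standard Lustre client mount: mount -t lustre <MGS NID>:/<fsname> <dir>
    run(["mount", "-t", "lustre", "%s:/%s" % (MGS_NID, FSNAME), MOUNT_POINT])


if __name__ == "__main__":
    mount_lustre_client()
```

Presumably the value of the downloadable kit is packaging steps like these, together with the client software itself, so that Platform Cluster Manager can apply them consistently to every compute node rather than an administrator repeating them by hand.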
"Deployment has always been a huge issue for HPC environments, especially in the storage arena. Until now, organizations seeking to deploy new storage initiatives have been slowed due to the inherent complexity of the process," said Rick Friedman, vice president of marketing and product management at Terascala. "Now, our customers can take advantage of Platform Cluster Manager to quickly roll out new storage initiatives and gain the high levels of management, performance and uptime that Platform offers. Platform is a leader in the management space and it makes perfect sense to partner with them as we continue to deliver our parallel storage appliances to the marketplace."
Based on the open source Project Kusu, Platform Cluster Manager is a pre-integrated, vendor-certified solution that enables the consistent delivery of scaled-out application clusters. Built from open source components, it includes all the tools required to deploy, run and manage clusters with ease.
"With data stores continuing to grow exponentially within most organizations to provide quick data access and greater processing performance, there is an absolute need in the marketplace for a solution that can combine high capacity storage with cluster management," said Ken Hertzler, vice president of product management, Platform Computing. "Our partnership with Terascala will make the deployment of large HPC storage solutions easier for customers to manage, allowing them to save time and get more out of the systems they have in place."
Terascala's High-Throughput, High-Capacity Storage Appliances
Terascala's appliances deliver the best price/performance among high-throughput storage solutions for HPC. The product line ranges from smaller, single-cluster, low-capacity scratch storage up to multi-petabyte, highly available solutions. All of Terascala's appliances deliver the performance expected of a parallel storage solution in an easy-to-deploy, easy-to-manage appliance, so that users can get the most out of their environments with little disruption.
Terascala develops high-throughput, high-capacity, cost-effective storage solutions. Its unique storage appliance approach is changing the dynamics of the performance-driven computing market, enabling existing users to do more for less and new users to maximize the capabilities of their processing infrastructure. Founded in 2005, Terascala is based in Avon, Mass. Learn more at www.terascala.com.
Source: Terascala, Inc.