June 23, 2009
SFA10000 accelerates applications with over one million IOPS and 10 GB/s, excels in transactional and unstructured data environments, and delivers the company's award-winning scalability and data protection
HAMBURG, Germany, June 23 -- DataDirect Networks (DDN), which has delivered more aggregate throughput to the world's 500 fastest supercomputers than all other IT vendors combined, today announced its Storage Fusion Architecture (SFA) storage platform, which delivers ease of use, file system and application simplification, unparalleled performance, scalability, and power savings for today's demanding IT environments.
"With the staggering growth of unstructured data in the enterprise, and the increasing complexity of managing network infrastructure requirements, our customers have been clamoring for a unified, easy-to-manage file and block computing and storage backbone," said Alex Bouzari, CEO and co-founder of DataDirect Networks. "For the past three years we have been developing and testing such functionality -- easy to deploy, capable of simultaneously simplifying the deployment of file systems and applications, scaling to extreme performance levels, and delivering significant power savings and rock-solid reliability. That is the Storage Fusion Architecture, a platform designed for today's and tomorrow's environments that eliminates the compromises users previously made with antiquated storage and server platforms."
The evolution of processor technology has ushered in multicore computers, which are now commonly used to increase compute capacity and application performance. In massively threaded cluster and grid environments, even large-file throughput operations result in random, transactional I/O to scalable storage systems, requiring transactional throughput rather than simply maximum streaming capability.
Additionally, storage systems and servers must now be able to respond to heavily threaded I/O patterns by delivering transaction-optimized extreme storage bandwidth as well as high throughput. SFA fuses unsurpassed levels of bandwidth and IOPS with file and application hosting services in one easy-to-deploy appliance, providing the ideal cloud computing backbone for today's demanding unstructured and transaction-based environments.
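The effect described above can be sketched with a small illustration (not DDN code; the thread counts and block offsets below are arbitrary assumptions): each client thread reads its own file strictly sequentially, but when many such streams are interleaved, the request sequence arriving at the storage array jumps between distant regions, so the array must sustain transactional IOPS rather than a single streaming workload.

```python
# Illustrative sketch: why many concurrent sequential streams
# look like random I/O at the storage layer.

def stream_offsets(start_block, n_chunks):
    """Block offsets issued by one thread reading its file sequentially."""
    return [start_block + i for i in range(n_chunks)]

def interleave(streams):
    """Round-robin merge of requests from many threads --
    roughly the arrival order the storage array observes."""
    merged = []
    for requests in zip(*streams):
        merged.extend(requests)
    return merged

# Four threads, each streaming its own region of storage (assumed layout).
threads = [stream_offsets(start, 4) for start in (0, 1000, 2000, 3000)]
arrival_order = interleave(threads)

# Each thread is sequential, but the merged pattern is far from it:
# 0, 1000, 2000, 3000, 1, 1001, 2001, 3001, ...
print(arrival_order)
```

The per-thread lists are perfectly sequential, yet no two consecutive requests in the merged order are adjacent on disk, which is the randomization of I/O the release attributes to multicore, heavily threaded environments.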
"The era of CPU core speed performance improvements is effectively winding down in favor of multicore performance scaling," said Steve Conway, research vice president, high performance computing group, IDC. "With four-core processors being commonly deployed today and six- and eight-core processors right around the corner, storage requirements are changing and I/O is increasingly becoming randomized. DDN has designed the Storage Fusion Architecture to quickly respond to advancements in processing and memory technology as HPC enters the era of petascale environments."
The SFA10000 delivers an unrivaled 10 GB/s of read and write performance and over one million IOPS, with the ability to manage up to 2.4 petabytes of storage in just two datacenter floor tiles. Other features include file and application hosting, an active/active redundant design, mirrored cache, support for multiple RAID levels and SSD, SAS, and SATA disk drives, intelligent block striping, SATAssure data protection, power-saving modes, and 8 Gb Fibre Channel and 40 Gb InfiniBand host-port options.
"We gravitate to convenient, but constrained, descriptors in this industry: users are either 'enterprise' or 'mid-sized,' perhaps 'commercial' or 'rich media' focused, and have structured block storage or unstructured file storage," said Mark Peters, analyst, Enterprise Strategy Group. "The real world is far more convoluted. Thus, unstructured data is flooding the traditional 'commercial' space as organizations collaborate and communicate with rich media applications -- explaining why a growing number of 'enterprise' users are implementing DataDirect Networks, whose scalable, high-performing, dense systems are ideally suited for such applications. In the mashed-up real world, the programmable Storage Fusion Architecture from DDN is not just about extending performance and density for its traditional users, but is laying an intriguing foundation for a rapid addition of features which should extend its attractiveness to even more 'enterprise' users."
The SFA10000 is currently in customer trials and will be generally available in September of this year.
About DataDirect Networks
DataDirect Networks, Inc. (DDN), is the data infrastructure provider for the most extreme, content-intensive environments in the world -- including the largest online gaming and music sites, social networking applications developers, photo and video sharing services, high performance computing environments, and more than 400 broadcast and post-production facilities around the globe. With more than 215 petabytes installed worldwide, the company's technology delivers massive throughput and IOPS, scalable capacity, consistency, efficiency and data integrity for today's extremely competitive and evolving markets. Founded in 1998, DataDirect Networks serves customers through its global partnerships with Dell, IBM, Sony and other industry leaders; and through its offices in Europe, India, Asia Pacific, Japan and throughout the U.S. For more information, go to www.ddn.com or call +1-800-TERABYTE (837-2298).
Source: DataDirect Networks, Inc.