December 03, 2010
Increased adoption among the upper echelon of the most demanding systems
CHATSWORTH, Calif., Dec. 3 -- DataDirect Networks (DDN), the leading data infrastructure provider for scalable, content-intensive enterprises, today announced that its adoption among the world's fastest supercomputers has significantly increased, as ranked by the 36th edition of the TOP500 list, released in November 2010 and available at www.top500.org. As the list of record for the world's fastest HPC systems, the TOP500 serves as the industry's leading barometer of trends and leadership across the clustered computing marketplace.
Having focused on the unique challenges of the HPC industry for over eight years, DDN sees its increasing adoption among the world's most data-intensive computing datacenters on this list as validation of key product design decisions and strong customer focus. DDN's share of storage systems for the TOP500 supercomputers increased across all geographies and application areas, boosting the company's share of storage in a number of categories.
"Supercomputing has become a top priority for governments, universities, and private corporations around the world," said Alex Bouzari, cofounder and CEO, DDN. "In every market and application we serve, DDN works very hard to establish our highly-differentiated solutions as the gold standard by which others are measured, and we are pleased to be the platform of choice for organizations solving the challenging and increasing demands of complex HPC centers."
DDN continues to extend its leadership in HPC storage by delivering products differentiated by two key strategies. First is an Open Platform strategy that gives customers the choice among four world-class file systems and Hierarchical Storage Management (HSM) systems, covering the full range of HPC application requirements. Second is a focus on performance leadership that accelerates HPC users' applications: customers can achieve industry-leading performance in both streaming and random data access by choosing high-performance technologies such as native InfiniBand and solid state disks. DDN's HPC-optimized storage solutions provide diagonal scalability, allowing customers to increase capacity and/or performance as their application needs dictate. The Open Platform strategy and performance leadership are the key reasons why three-quarters of the top 20 supercomputers leverage DDN systems.
"DDN is an established vendor in the HPC market, with a set of solutions that are tailored for the demanding needs of HPC environments," said Earl Joseph, Ph.D., IDC program vice president for high performance computing. "Given the company's product focus and expertise in HPC, their growth in this area is not a surprise."
Recognition of DDN's storage leadership in HPC was reiterated at the recently concluded SC10 conference, where HPCwire, a leading industry publication, recognized the DDN Storage Fusion Architecture for its groundbreaking innovation. Combining industry-leading storage performance with intelligent virtualization, this advanced platform eliminates the storage gateways and networking that have traditionally been required to build a clustered, parallel HPC storage environment, while also lowering latency to increase application performance.
About DataDirect Networks
DataDirect Networks, Inc. is the data infrastructure provider for the most extreme, content-intensive environments in the world -- including the largest online gaming and music sites, social networking applications developers, photo and video sharing services, high performance computing environments, and seven of the 10 largest supercomputers in the world. Having sold hundreds of petabytes of storage systems worldwide, the company's storage technology delivers massive throughput, scalable capacity, consistency, efficiency and data integrity for today's extremely competitive and evolving markets. For more information, go to www.ddn.com or call +1-800-TERABYTE (837-2298).
Source: DataDirect Networks, Inc.
In quieter times, sounding the bell of funding big science with big systems tends to resonate further than when ears are already burning with sour economic and national security news. For exascale's future, however, the time could be ripe to instill some sense of urgency...
In a recent solicitation, the NSF laid out its needs for furthering scientific and engineering infrastructure with new tools that go beyond top performance. Having already delivered systems like Stampede and Blue Waters, the agency is turning an eye to solving data-intensive challenges. We spoke with the NSF's Irene Qualters and Barry Schneider about...
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle computational peaks that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
May 23, 2013 |
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what they call ‘Climate in a Box,’ a system they note acts as a desktop supercomputer.
May 22, 2013 |
At some point in the not-too-distant future, building powerful, miniature computing systems will be considered a hobby for high schoolers, just as robotics or even Lego-building are today. That could be made possible through recent advancements made with the Raspberry Pi computers.
May 16, 2013 |
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud, benchmarking a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 15, 2013 |
Supercomputers at the Department of Energy's National Energy Research Scientific Computing Center (NERSC) have tackled important computational problems such as the collapse of the atomic state and the optimization of chemical catalysts, and are now modeling popping bubbles.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/15/2013 | Bull | “50% of HPC users say their largest jobs scale to 120 cores or less.” How about yours? Are your codes ready to take advantage of today’s and tomorrow’s ultra-parallel HPC systems? Download this White Paper by Analysts Intersect360 Research to see what Bull and Intel’s Center for Excellence in Parallel Programming can do for your codes.
In this demonstration of the SGI DMF ZeroWatt disk solution, Dr. Eng Lim Goh, SGI CTO, discusses how SGI DMF software reduces costs and power consumption in an exascale (Big Data) storage datacenter.
The Cray CS300-AC cluster supercomputer offers an energy-efficient, air-cooled design based on modular, industry-standard platforms, featuring the latest processor and network technologies and supporting a wide range of datacenter cooling requirements.