October 04, 2012
CHATSWORTH, Calif., Oct. 4 — DataDirect Networks (DDN), the leader in massively scalable storage, today announced a significant increase in the adoption of its award-winning High Performance Computing (HPC) and Big Data storage solutions by the world’s fastest supercomputers, as ranked by the June 2012 edition of the TOP500 list, available at www.top500.org.
With a decade-long dedication to solving the world's largest scalability challenges in HPC and Big Data environments, DDN's award-winning storage appliances are now deployed in more than 150 of the world's TOP500 computer systems and deliver more total storage bandwidth than all other vendors combined. In addition, DDN is deployed in more Top100 systems on the June 2012 TOP500 list than any other vendor.
“We are very pleased to help our customers solve their business challenges better, faster, more cost effectively and more reliably than ever before in academia, supercomputing centers, life sciences environments and all the places where Big Data exists,” said Alex Bouzari, CEO and cofounder, DDN. “As more and more organizations and government agencies have to capture, store, process, access, and collaborate on massive amounts of data, DDN storage appliances are increasingly being adopted as the solution of choice.”
The current Top500 list includes DDN customers from every continent, reflecting the worldwide adoption of the company’s award-winning SFA technology. DDN storage solutions are relied upon by more than 60% of the top 100 systems worldwide, including Argonne National Laboratory and Oak Ridge National Laboratory in the United States, Leibniz Rechenzentrum in Germany, CEA in France, and IFERC in Japan.
“DDN is one of the most unique companies in the storage industry,” said David Vellante, Chief Research Officer at Wikibon.org. “DDN has been engineering solutions to large data problems for over a decade, well before the term Big Data was conceived. It is a non-conventional technology company and has the potential to disrupt traditional thinking as it helps usher in the modern Big Data era. DDN’s recognition as a major player in Big Data is impressive and we would expect the company to accelerate its momentum in this space as the demand to ingest, process, store and distribute massive data sets escalates.”
About DataDirect Networks
DataDirect Networks (DDN) is the world leader in massively scalable storage. We are the leading provider of data storage and processing solutions and professional services that enable content-rich and high-growth IT environments to achieve the highest levels of systems scalability, efficiency and simplicity. DDN enables enterprises to extract value and deliver results from their information. Our customers include the world's leading online content and social networking providers; high performance cloud and grid computing sites; and life sciences, media production, and security and intelligence organizations.
Deployed in thousands of mission-critical environments worldwide, DDN's solutions have been designed, engineered and proven in the world's most scalable data centers to ensure competitive business advantage for today's information-powered enterprise.
Source: DataDirect Networks
In quieter times, sounding the bell for funding big science with big systems tends to resonate further than when ears are already burning with sour economic and national security news. For exascale's future, however, the time could be ripe to instill some sense of urgency...
In a recent solicitation, the NSF laid out its needs for furthering the nation's scientific and engineering infrastructure with new tools that go beyond top performance. Having already delivered systems like Stampede and Blue Waters, the agency is turning an eye to solving data-intensive challenges. We spoke with the agency's Irene Qualters and Barry Schneider about...
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls 'Climate in a Box,' a system it describes as a desktop supercomputer.
May 22, 2013
At some point in the not-too-distant future, building powerful, miniature computing systems will be considered a hobby for high schoolers, just as robotics or even Lego-building are today. That could be made possible by recent advancements with Raspberry Pi computers.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud, benchmarking a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 15, 2013
Supercomputers at the Department of Energy's National Energy Research Scientific Computing Center (NERSC) have tackled important computational problems such as the collapse of atomic states and the optimization of chemical catalysts, and are now modeling popping bubbles.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can exploit these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that emphasize affordability, security, and scalability.
04/15/2013 | Bull | "50% of HPC users say their largest jobs scale to 120 cores or less." How about yours? Are your codes ready to take advantage of today's and tomorrow's ultra-parallel HPC systems? Download this white paper by analyst firm Intersect360 Research to see what Bull and Intel's Center for Excellence in Parallel Programming can do for your codes.
In this demonstration of the SGI DMF ZeroWatt disk solution, Dr. Eng Lim Goh, CTO of SGI, discusses how SGI DMF software reduces costs and power consumption in an exascale (Big Data) storage datacenter.
The Cray CS300-AC cluster supercomputer offers an energy-efficient, air-cooled design based on modular, industry-standard platforms, featuring the latest processor and network technologies and supporting a wide range of datacenter cooling requirements.