March 06, 2013
SUNNYVALE, Calif., March 6 — Massive amounts of data are being created and processed by organizations around the world, including oil and gas companies and life sciences firms. University research centers likewise rely on data to analyze seismic activity, predict severe storms, and power genomic and bioscience research that can save lives. These organizations are increasingly looking for technology solutions that deliver high performance and bandwidth at an affordable cost.
To address these challenges, NetApp today announced the new NetApp E5500, designed to deliver industry-leading performance, efficiency, and reliability for big data and high-performance computing (HPC). The latest addition to the NetApp E-Series platform, the E5500 provides a foundation for highly available, high-capacity application workflows, delivering industry-leading storage performance at half the footprint and half the operational cost of competitive systems.
Big data and HPC customers require an infrastructure that provides speed, scale, and cost efficiency. As a seventh-generation E-Series platform, the E5500 builds on the modular scalability and proven reliability of the previous generations and provides customers with a new level of performance, efficiency, and reliability. With a robust high-performance architecture, improved storage density, and additional support enhancements, the latest E5500 controller provides OEMs and organizations with a platform that overcomes the speed, scale, and reliability challenges posed by big data and HPC while delivering a better return on investment.
"HPC and big data customers need high performance to ingest and analyze huge amounts of data, while still managing power and cost efficiently," said Brendon Howe, vice president, Product and Solutions Marketing, NetApp. "High performance at a reasonable cost can be a difficult balance to strike; however, with over 500,000 E-Series systems deployed, NetApp's deep industry and storage experience created a strong foundation for the new E5500. The momentum of E-Series enabled us to build a new product that provides industry-leading bandwidth per dollar spent while improving density and reliability."
Industry-Leading Performance Confirmed
The SGI InfiniteStorage 5600, an OEM version of the NetApp E5500, has produced a new SPC-2 result that confirms the performance and cost efficiency of the E5500 and showcases what it unlocks for HPC and big data organizations. The audited, peer-reviewed SPC-2 result demonstrates the highest throughput per spindle among published results, exceeding the nearest non-NetApp result by more than 2.5 times. It also validates how the E5500 helps customers accelerate business results while reducing operational costs and footprint.
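For context, SPC-2 reports an aggregate throughput figure (SPC-2 MBPS) that can be normalized by drive count to compare per-spindle efficiency across systems. A minimal sketch of that normalization follows; the numbers are illustrative placeholders, not the audited figures.

```python
# Illustrative calculation of SPC-2 throughput per spindle.
# The figures below are placeholders, not the audited SPC-2 numbers.

def throughput_per_spindle(total_mbps: float, drive_count: int) -> float:
    """Normalize aggregate SPC-2 throughput (MB/s) by spindle count."""
    return total_mbps / drive_count

# Hypothetical systems: A sustains 6,000 MB/s on 60 drives,
# B sustains 4,000 MB/s on 100 drives.
system_a = throughput_per_spindle(6000.0, 60)   # 100.0 MB/s per spindle
system_b = throughput_per_spindle(4000.0, 100)  # 40.0 MB/s per spindle

print(f"System A: {system_a:.1f} MB/s per spindle")
print(f"System B: {system_b:.1f} MB/s per spindle")
print(f"Per-spindle advantage: {system_a / system_b:.1f}x")  # 2.5x
```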
Improved Scale and Density Drive Best-in-Class Performance Efficiency
The new E5500 has a modular architecture that can be paired with parallel file systems such as Lustre, and with frameworks such as Hadoop, to scale performance efficiently as building blocks are added. Combined with 4TB drive support, the E5500 provides the density and speed needed to accelerate time to results for HPC and big data customers. The result is a storage infrastructure with significantly higher density, more bandwidth, and best-in-class performance efficiency.
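The scaling claim amounts to simple arithmetic: each building block contributes bandwidth and capacity, and a parallel file system aggregates them. A back-of-the-envelope sketch follows; the per-block bandwidth and drive count are assumptions, with only the 4TB drive size taken from the announcement.

```python
# Back-of-the-envelope scaling model for a modular storage building block.
# Per-block bandwidth and drive count are assumptions; only the 4TB drive
# size comes from the announcement. These are not E5500 specifications.

BLOCK_BANDWIDTH_MBPS = 6000   # assumed sustained bandwidth per building block
DRIVES_PER_BLOCK = 60         # assumed drives per building block
DRIVE_CAPACITY_TB = 4         # 4TB drives, per the announcement

def aggregate(blocks: int) -> tuple:
    """Return (bandwidth in GB/s, raw capacity in TB) for N blocks,
    assuming near-linear scaling behind a parallel file system
    such as Lustre."""
    bandwidth_gbps = blocks * BLOCK_BANDWIDTH_MBPS / 1000
    capacity_tb = blocks * DRIVES_PER_BLOCK * DRIVE_CAPACITY_TB
    return bandwidth_gbps, capacity_tb

for n in (1, 4, 16):
    bw, cap = aggregate(n)
    print(f"{n:2d} blocks: ~{bw:.0f} GB/s aggregate, {cap} TB raw")
```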
NetApp AutoSupport Improves Enterprise-Class Reliability
Alongside the E5500, the NetApp AutoSupport tool is now available for the E-Series product line, providing improved service and uptime to customers. NetApp AutoSupport informs NetApp's worldwide support organization of key metrics and system information. Benefits include improved system health and uptime, enhanced storage and operational efficiency, and an overall improved support experience.
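In general, tools of this kind work by periodically collecting health metrics and reporting them to the vendor's support organization. The sketch below illustrates that "phone home" pattern only; every field name and the endpoint are invented for illustration and do not represent NetApp's actual AutoSupport format or API.

```python
# Generic illustration of periodic "phone home" health telemetry.
# Field names and endpoint are hypothetical; this is NOT NetApp's
# actual AutoSupport protocol or payload format.
import json
import time
import urllib.request

def collect_health_snapshot() -> dict:
    """Gather the kind of metrics a support tool might report."""
    return {
        "system_id": "array-0001",   # hypothetical identifier
        "timestamp": int(time.time()),
        "controller_status": "optimal",
        "drive_failures": 0,
        "read_mbps": 5800.5,         # placeholder performance counters
        "write_mbps": 4200.0,
    }

def send_snapshot(endpoint: str) -> None:
    """POST a JSON snapshot to a (hypothetical) support endpoint."""
    payload = json.dumps(collect_health_snapshot()).encode("utf-8")
    req = urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget for illustration

# Example (requires a reachable endpoint):
# send_snapshot("https://support.example.com/telemetry")
```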
"The research carried out on the HPC systems of the Center for Information Services and High Performance Computing at the TU Dresden comprises numerous disciplines, each with their own storage requirements," said Prof. Dr. Wolfgang Nagel, director of the center. "Our new supercomputer, delivered by Bull, uses the new NetApp E5500 as the base for an excellent storage system that will allow our researchers to get their results faster. The enhanced reliability features and the performance analysis possibilities significantly increase our capabilities to support the users. We are already using NetApp FAS systems for central IT services at the TU Dresden and are happy that the AutoSupport feature has also been extended to the E-Series products."
About NetApp
NetApp creates innovative storage and data management solutions that deliver outstanding cost efficiency and accelerate business breakthroughs. Our commitment to living our core values and consistently being recognized as a great place to work around the world are fundamental to our long-term growth and success, as well as the success of our partners and customers.