November 08, 2012
SUNNYVALE, Calif., Nov. 8 — Today, NetApp announced enhancements to its NetApp E-Series platform, designed for high-performance applications and data-intensive workflows. The updated E-Series platform forms a foundation for highly available, high-capacity, performance-optimized application workflows in critical vertical markets, including healthcare, security, media and entertainment, and in high-performance computing segments such as oil and gas, manufacturing, and government.
The new features, delivered in the latest update to the NetApp SANtricity storage management software for E-Series, include SSD cache for improved performance, broader network interface support on the E5400 storage system for increased connectivity and network flexibility, and new mirroring and replication services for greater data protection. The enhanced platform gives NetApp and its OEM partners a trusted foundation for building innovative storage systems that deliver superior performance for the most challenging big data workloads.
“NetApp’s E-Series platform is deployed in the world’s most demanding data-intensive environments,” said Brendon Howe, vice president, Product and Solutions Marketing, NetApp. “These dedicated workload environments require proven and versatile storage that delivers high performance and scale without high cost. With features such as SSD cache support, petabyte-scale capacity, and improved data protection, our E-Series platform will help organizations accelerate innovation and speed insight to action.”
SSD Cache Improves Performance, Reduces Storage Costs, and Accelerates Innovation
Organizations dealing with large, complex datasets can accelerate their workflows with the new SSD cache capability in the NetApp E-Series platform. SSD cache automatically stores blocks of “hot” data on solid-state drives for rapid access, improving application performance, while less frequently accessed data remains on more cost-effective, higher-capacity HDDs to meet capacity and retention requirements. The result is intelligent storage tiering: faster access to critical application data and cost-effective density for longer retention periods. By providing streamlined access to the right storage medium for the right use at the right time, the E-Series platform enables OEMs to build performance-optimized, high-capacity, cost-effective solutions that accelerate innovation, analysis, and workflows for the most demanding customers.
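To illustrate the general idea behind an SSD read cache, the short Python sketch below models a small LRU-style cache of hot blocks sitting in front of HDD-backed storage. The class name, block addresses, cache size, and eviction policy are illustrative assumptions for this sketch, not a description of how SANtricity implements its cache.

from collections import OrderedDict

class SSDCacheSim:
    """Toy model of an SSD read cache in front of HDD-backed volumes.

    Hypothetical illustration only: the LRU policy and sizes are assumptions,
    not SANtricity internals.
    """

    def __init__(self, cache_blocks):
        self.cache = OrderedDict()   # block_id -> True, ordered by recency
        self.capacity = cache_blocks
        self.ssd_hits = 0
        self.hdd_reads = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # refresh recency on a hit
            self.ssd_hits += 1
            return "ssd"
        self.hdd_reads += 1                    # cold read served from HDD
        self.cache[block_id] = True            # promote the block to SSD
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used block
        return "hdd"

if __name__ == "__main__":
    sim = SSDCacheSim(cache_blocks=4)
    # A skewed workload: a few "hot" blocks dominate the read stream.
    workload = [1, 2, 1, 3, 1, 2, 4, 1, 2, 5, 1, 2, 1, 3]
    for blk in workload:
        sim.read(blk)
    print(f"SSD hits: {sim.ssd_hits}, HDD reads: {sim.hdd_reads}")

Running it against the skewed read stream shows most accesses served from the simulated SSD tier, which is the effect the caching feature targets for hot application data.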
Multiple Interface Support Enables Flexible IT Designs
A flexible infrastructure helps organizations adapt as their needs change. With the E-Series updates, customers and OEMs can now use new 10Gb/s iSCSI and 6Gb/s SAS network interfaces on NetApp E5400 systems, providing the broadest network interface selection available for dedicated workload solutions. These interfaces give customers more options for connecting hosts and for extending replication between E-Series systems.
Enhanced Data Protection Features Enable Efficient, Reliable Operations
Enhanced data protection features help E-Series and OEM customers deliver a dedicated, highly available, application-specific infrastructure. Updates include additional mirroring and copy services as well as dynamic disk pools to protect data. These capabilities help big data customers maintain business operations and reduce storage system costs, while the E-Series redundant components, automated path failover, and online administration keep organizations productive.
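As a rough illustration of what asynchronous mirroring does, the Python sketch below journals writes on a primary volume and ships them to a secondary copy in the background. The journal-and-replay scheme and the AsyncMirror class are generic replication patterns assumed for this sketch, not SANtricity's actual mechanism.

class AsyncMirror:
    """Toy sketch of asynchronous volume mirroring.

    Hypothetical illustration: a generic journal/replay pattern, not the
    SANtricity implementation.
    """

    def __init__(self):
        self.primary = {}      # block_id -> data on the primary volume
        self.secondary = {}    # block_id -> data on the remote mirror
        self.journal = []      # pending writes not yet shipped to the mirror

    def write(self, block_id, data):
        # Acknowledge the host once the primary and journal are updated.
        self.primary[block_id] = data
        self.journal.append((block_id, data))

    def replicate(self):
        # Periodically drain the journal to the secondary site.
        while self.journal:
            block_id, data = self.journal.pop(0)
            self.secondary[block_id] = data

    def in_sync(self):
        return not self.journal and self.primary == self.secondary

if __name__ == "__main__":
    m = AsyncMirror()
    m.write(0, b"header")
    m.write(1, b"payload")
    print("in sync before replication:", m.in_sync())   # False
    m.replicate()
    print("in sync after replication:", m.in_sync())    # True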
“As the amount of information grows, customers look to Teradata to provide data management solutions that improve system performance with new features such as atomic write support,” said Ed White, general manager, Teradata Appliances. “The E-Series, with its flexible configuration, continues to be a highly reliable platform that supports Teradata’s high-performance industry-leading appliances.”
Today’s announcement, which supports dedicated infrastructures for application-specific workloads, follows NetApp’s release of the FAS3220 and FAS3250 earlier this week. The FAS3200 series is designed for shared, virtualized infrastructures that set the foundation for an Intelligent, Immortal, and Infinitely Scalable agile data infrastructure. SANtricity software for dedicated infrastructures and clustered Data ONTAP software for shared virtual infrastructures allow NetApp to support a broad range of storage demands, from dedicated workloads to virtualized data centers.
Evergreen Films, Inc.
“In media and entertainment, having a storage infrastructure that helps us to produce high-quality films faster than ever before, while reducing costs, is a substantial competitive advantage,” said Pat Devlin, postproduction supervisor, Evergreen Films, Inc. “That advantage is driven by storage technologies that not only have high bandwidth and capacity, but also those that offer the scalability and flexibility we need to adapt to new projects. The new intelligent SSD cache and network interface features in NetApp E-Series will improve our ability to produce films efficiently and reduce our risk of delays.”
“Seismic processing creates a really data-intensive environment,” said Larry Fink, product manager, seismic processing, Halliburton, Landmark Software and Services. “In our work with Quantum and NetApp, the E-Series has already proven that it can provide a dedicated infrastructure for high performance with real-world seismic applications. The new E-Series enhancements in reliability and network interfaces will have a major impact on our ability to improve performance for seismic processing.”
About NetApp
NetApp creates innovative storage and data management solutions that deliver outstanding cost efficiency and accelerate business breakthroughs. Our commitment to living our core values and consistently being recognized as a great place to work around the world are fundamental to our long-term growth and success, as well as the success of our pathway partners and customers.