June 13, 2012
SUNNYVALE, Calif., June 13 -- Organizations across a wide range of industries are dealing with the realities that big data presents. Machine- and user-generated data is exploding in volume and is expected to grow at a rate of 40% year over year1. Because organizations must be able to quickly and efficiently turn this data into valuable information that drives smarter business decisions, they require a storage solution that delivers continuous data availability, optimal performance, and simplified management.
To address these customer requirements, NetApp is now shipping Dynamic Disk Pools, a powerful new data protection technology that improves overall system performance and is available on big data solutions built on the NetApp E-Series platform. Additionally, NetApp today also announced that Whamcloud's Chroma Enterprise management software is now available with the NetApp® High-Performance Computing Solution for Lustre to help customers reduce management complexities.
New Technology Enables Continuous Availability, Improves Performance
Dynamic Disk Pools is an innovative new technology that dynamically reconstructs failed drives, enabling customers to maintain data availability by restoring a system to optimal performance up to eight times faster than traditional RAID architectures. Dynamic Disk Pools also decreases the performance impact of a failed drive by up to 60%. This allows customers to maintain required performance levels during drive failures, which helps lower overall storage system costs and enables them to analyze data more quickly to speed time to results.
"Technologies like NetApp Dynamic Disk Pools would help the Oak Ridge Leadership Computing Facility deliver consistent, reliable performance even in the face of disk failures," said Dave Dillow, advanced systems architect for Oak Ridge National Laboratory, the largest science and energy national laboratory in the U.S. Department of Energy system. "At the scale and component counts of our computing systems, gracefully handling these failures is fundamental to our mission goals. Our early testing of NetApp's technology has been very positive, and we look forward to seeing it in the market."
Whamcloud Partnership Simplifies Lustre Management
To help customers simplify overall management of their Lustre environments and deal with the complexities related to deployment and management, NetApp has partnered with Whamcloud to incorporate its Lustre Software Manager powered by Chroma as part of the NetApp High-Performance Computing Solution for Lustre.
The new central management system provides customers with a unified view of their Lustre storage environment. In addition to simplifying overall management, it also streamlines several tasks that are often complex and time consuming, such as installation, configuration, maintenance, monitoring, and fault diagnosis. This allows customers to maximize the performance of their storage environment and reduce the costs associated with managing high-performance computing storage.
"High-performance computing applications are only getting larger and more complex, and, as a result, customers are looking for ways to simplify their environments," said Brent Gorda, chief executive officer, Whamcloud. "Combining Chroma with the NetApp solution provides customers with a truly industry-leading storage solution optimized for Lustre environments and delivers unique management capabilities that offer new levels of simplicity. Now customers can spend less time managing their Lustre environment and more time analyzing data to achieve success."
The enhanced NetApp High-Performance Computing Solution for Lustre bolsters NetApp's portfolio of big data solutions on its E-Series platform. NetApp big data solutions are built on NetApp E-Series and Data ONTAP® platforms to efficiently process, analyze, manage, and access data at scale, meeting customers' analytics, bandwidth, and content needs across a diverse range of industries, including oil and gas, financial services, media and entertainment, high-performance computing, and the public sector. With NetApp big data solutions, customers can spark innovation, make better decisions, and drive successful business outcomes at the speed of today's business.
Separately today, NetApp and Hortonworks entered a strategic partnership to develop and pretest joint Apache Hadoop solutions using the Hortonworks Data Platform, helping customers simplify access to their data and gain deeper business insight to make the right decisions faster. NetApp also unveiled the NetApp Open Solution for Hadoop Rack to provide customers with a ready-to-deploy enterprise Hadoop solution. Learn more about how your business can take advantage of big data analytics with enterprise-ready Apache Hadoop solutions from NetApp and Hortonworks in the separate press release.
Pricing and Availability
Dynamic Disk Pools technology is already shipping on all big data solutions based on E-Series, which are available from NetApp sales and channel partners. The enhanced NetApp High-Performance Computing Solution for Lustre is available today from NetApp sales and channel partners. Pricing is available from NetApp direct sales and channel partners.
1 McKinsey & Company, "Big data: The next frontier for innovation, competition, and productivity," May 2011.
NetApp creates innovative storage and data management solutions that accelerate business breakthroughs and deliver outstanding cost efficiency. Discover NetApp's passion for helping companies around the world go further, faster at www.netapp.com.
NetApp, the NetApp logo, Go further, faster, and Data ONTAP are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.