June 10, 2008
NetApp today announced new technologies in the middle of its performance roadmap that it says are aimed at the scale-out storage needs of engineering and HPC environments. The company's new kit combines a tighter footprint, which saves on infrastructure costs, with software and appliances that speed the path of data from disk to user.
Known as Network Appliance until just a few months ago, NetApp is a nearly $3 billion a year company focused on network-attached storage systems. The company's products serve the archiving and content delivery needs of medium- and large-sized enterprises, including Yahoo! and Deutsche Telekom.
With this announcement NetApp introduces solid advances in the mid-range of its hardware lineup, following recent refreshes at both ends of the performance spectrum. The FAS2000 series covers storage needs at the low end, up to roughly 100 terabytes, while the FAS6080 supports configurations of just over 1 petabyte. The FAS3140 and FAS3170 announced today can be configured with up to 420 terabytes and 840 terabytes, respectively, and round out the middle of the company's storage lineup. They are complemented by the V3140 and V3170, variants that allow NetApp's controllers to be integrated with storage arrays from many of the company's competitors.
The FAS3140 and 3170 are scale-out storage systems that aim to provide faster throughput with multiple points of access to stored data. As such, the filesystem is not ideal for every workload on its own, but it will be well suited to workloads with independent data. In technical computing, and in HPC in particular, the company will face stiff competition from the established market positions held by Panasas, SGI, and BlueArc.
NetApp's Storage Acceleration Appliance, also announced today, addresses another key use case: multiple readers working on a single data set, as found in applications such as genome search, financial services, and image processing. The appliance automatically caches copies of the data set to maintain maximum bandwidth to multiple independent readers. The cache holding the replicated data sets can be solid state or disk, and the solution offers centralized administration (there is still only one master copy of the data) with the benefits of distributed access.
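To make that arrangement concrete, here is a minimal sketch of the general pattern described above: one authoritative master copy of the data, with read-through caches that lazily replicate blocks close to the readers. The class names and block layout are illustrative assumptions, not NetApp's implementation.

```python
# Hypothetical sketch of the pattern: one writable master copy of a data set,
# plus read-only caches that lazily replicate blocks for independent readers.
# Illustration of the concept only, not NetApp's actual appliance software.

class MasterStore:
    """Single authoritative copy of the data set."""
    def __init__(self, blocks):
        self._blocks = dict(blocks)          # block_id -> bytes
        self.reads_served = 0                # traffic hitting the master

    def read(self, block_id):
        self.reads_served += 1
        return self._blocks[block_id]

    def write(self, block_id, data):
        self._blocks[block_id] = data        # writes go only to the master


class ReadCache:
    """One of many caches (solid state or disk) placed near the readers."""
    def __init__(self, master):
        self._master = master
        self._cache = {}                     # locally replicated blocks

    def read(self, block_id):
        if block_id not in self._cache:      # miss: fetch once from master
            self._cache[block_id] = self._master.read(block_id)
        return self._cache[block_id]         # hit: served locally


if __name__ == "__main__":
    master = MasterStore({i: b"x" * 4096 for i in range(8)})
    caches = [ReadCache(master) for _ in range(4)]   # four reader groups

    # Each reader group scans the whole data set twice; only the first pass
    # touches the master, so its load stays bounded as readers are added.
    for cache in caches:
        for _ in range(2):
            for block in range(8):
                cache.read(block)

    print("reads served by master:", master.reads_served)  # 32 of 64 reads
```

Because each cache fetches a given block from the master only once, bandwidth to the readers can scale out while administration stays centered on the single master copy.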
The last piece of hardware in today's announcement is the Performance Acceleration Module, an add-on card to improve performance for workloads that are dominated by random read access (such as file serving). Up to 5 modules snap into PCI Express slots in the company's existing storage servers and provide an "intelligent" read cache. NetApp's software offers analysis tools that can predict whether your workload would benefit from installing the module before you make the investment.
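That workload-analysis step lends itself to a simple illustration. The sketch below is hypothetical rather than NetApp's tooling: it scans a trace of read requests and reports how random the access pattern is and how often blocks are re-read, the two properties that determine whether an intelligent read cache would pay off.

```python
# Hypothetical sketch of the kind of analysis described above: given a trace
# of (offset, size) read requests, estimate how random the access pattern is
# and how much of it re-reads recently touched blocks. The heuristic and its
# thresholds are illustrative assumptions, not NetApp's actual tools.

def analyze_read_trace(trace, block_size=4096, window=10_000):
    """Return the fraction of random reads and the re-read (cacheable) rate."""
    random_reads = 0
    reuse_hits = 0
    recent = []                              # sliding window of recent blocks
    prev_end = None

    for offset, size in trace:
        if prev_end is not None and offset != prev_end:
            random_reads += 1                # not sequential with prior read
        prev_end = offset + size

        block = offset // block_size
        if block in recent:
            reuse_hits += 1                  # would likely hit a read cache
        recent.append(block)
        if len(recent) > window:
            recent.pop(0)

    n = len(trace)
    return random_reads / n, reuse_hits / n


if __name__ == "__main__":
    import random
    # Synthetic file-serving-like workload: small reads scattered over a
    # working set that is revisited often.
    random.seed(0)
    hot_blocks = [random.randrange(1_000_000) for _ in range(200)]
    trace = [(random.choice(hot_blocks) * 4096, 4096) for _ in range(5_000)]

    random_frac, reuse_frac = analyze_read_trace(trace)
    print(f"random reads: {random_frac:.0%}, re-read rate: {reuse_frac:.0%}")
    if random_frac > 0.8 and reuse_frac > 0.5:
        print("workload looks like a good candidate for a read cache")
```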
Also included in today's announcement is a Remote Support Agent that monitors the health of your installation and proactively opens tickets with NetApp on your behalf, heading off problems before they turn into downtime or lost data.
According to Brendon Howe, vice president and general manager of the NAS & V-Series business units, today's announcement is strategic for NetApp: "We have focused a lot of effort on the enterprise side of our storage offering lately, and now we're moving to aggressively market the new technologies we've been developing for the technical side of the computing market." But with established vendors already holding strong beachheads in this market, and HP, Sun, IBM, and others taking aim at a larger slice of the pie, it remains to be seen whether NetApp can find its niche in the HPC market.