September 06, 2011
Gregory Wong of analyst group Forward Insights says that “SSDs use non-volatile NAND flash memory chips, which are cheaper than DRAM chips but are still as much as 18 times more expensive than 15,000 rpm Fibre Channel or serial SCSI (SAS) drives.” He claims that the price of SSDs is expected to come down as more customers adopt solid-state drive and NAND flash card technology, a trend that will trickle down to the data center.
The breaking point for SSD affordability will come when the price reaches the $1 per gigabyte level, which some analysts expect to happen late next year. As it stands now, NAND flash in an SSD form factor runs around $9 per gigabyte for high-end, single-level cell (SLC) flash and $3 per gigabyte for multi-level cell (MLC) flash. To put this in perspective, Fibre Channel and SAS drives are in the 50 to 60 cents per gigabyte range.
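The price figures above can be lined up in a few lines of arithmetic. This sketch uses only the article's circa-2011 estimates; it reproduces the "18 times more expensive" premium quoted earlier by comparing high-end SLC flash against the cheapest disk figure.

```python
# Per-gigabyte prices quoted in the article (circa 2011, not current).
slc_price, mlc_price = 9.00, 3.00   # $/GB, NAND flash in SSD form factor
disk_lo, disk_hi = 0.50, 0.60       # $/GB, 15K rpm FC/SAS drives
break_even = 1.00                   # the $1/GB affordability point

# Worst-case premium: flash vs. the cheapest disk in the range.
print(f"SLC premium: up to {slc_price / disk_lo:.0f}x disk")   # up to 18x
print(f"MLC premium: up to {mlc_price / disk_lo:.0f}x disk")   # up to 6x
print(f"MLC is still {mlc_price / break_even:.0f}x the $1/GB breaking point")
```

Note that even the cheaper MLC flash sits at triple the projected break-even price, which is why the $1/GB milestone matters.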
As Computerworld's Lucas Mearian said, “When it comes to PCIe NAND flash cards, like those sold by Fusion-io, Texas Memory Systems, Micron or Virident Systems, which can be used in all-flash arrays or in application servers themselves, prices can go through the roof, but so does performance due to the higher speed interconnect and the proximity of the flash storage to the server processors.”
The problem is that some people don’t understand why the expense is there, and furthermore may not realize that SSDs are not a magic bullet for dramatic performance improvements. In fact, some use cases show that the investment in SSD technology is far from worth it.
An example of this point comes from Computerworld:
Dan Marbes, a systems engineer at a Green Bay, Wis.-based bank, decided to try solid-state drives (SSDs) to increase performance on I/O-hungry applications while reducing his spindle footprint.
He bought three SSDs to serve as top-tier storage for business intelligence (BI) applications on his SAN. The flash storage outperformed 60 15,000-rpm Fibre Channel disk drives when it came to small-block reads.
However, when Marbes used the SSDs for large-block random reads and any writes, "the 60 15K spindles crushed the SSDs," he said, demonstrating that flash deployments should be strategically targeted at specific applications.
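A back-of-the-envelope aggregate-IOPS comparison shows why a handful of SSDs can beat 60 spindles on small random reads. The per-device figures below are illustrative estimates of typical 2011-era hardware, not measurements from Marbes' deployment.

```python
# Illustrative aggregate random-read IOPS (assumed per-device figures).
hdd_iops = 180        # typical small-block random-read IOPS, 15K rpm drive
ssd_iops = 20_000     # assumed small-block random-read IOPS, 2011-era SSD

hdd_array = 60 * hdd_iops   # 60 spindles -> 10,800 aggregate IOPS
ssd_tier = 3 * ssd_iops     # 3 SSDs      -> 60,000 aggregate IOPS

print(f"SSD tier advantage on small random reads: {ssd_tier / hdd_array:.1f}x")
```

The picture inverts for large-block transfers and writes, where each spindle streams data efficiently and the aggregate bandwidth of 60 drives overwhelms three SSDs, which is exactly what Marbes observed.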
The idea here is that SSDs can easily cover their own costs, sometimes in less than a year, depending on use. However, since this is not a one-size-fits-all solution, those who can’t clearly benefit from SSDs are the ones most likely to balk at what seems like a ridiculous cost.
Full story at Computerworld