Flash Dancing: Live from the Flash Memory Summit in Santa Clara
Today I had the opportunity to spend time at the Flash Memory Summit in Santa Clara. This is billed as the only conference focused entirely on Flash memory and its applications. The experience got me thinking again about an article I wrote a few weeks ago entitled Back to the Future: Solid-State Storage in Cloud Computing. Having spent time listening to sessions and walking the floor, it seemed appropriate to follow up on that article in the context of the event and a few things I noticed.
Though the show is very active and well attended, something seems to be missing: real end users of this technology. There is an abundance of vendors present, checking out their competition and listening to very interesting, technically oriented presentations. What I have not yet seen are many real-world users presenting their real-world workloads before and after migration to an SSD environment. Maybe that's tomorrow; we'll see.
Nevertheless, the market for SSDs is poised to expand, as noted in a press release this morning from Objective Analysis. The headline: 40 million SSDs shipped for $7 billion in revenues in 2015.
There is no question SSDs are finding acceptance, regardless of their packaging or host interfacing approach. A few key vertical markets and applications are experiencing not only the promised I/O performance improvement but also tangible benefits in a lower power envelope compared to HDDs and consolidation of infrastructure, both yielding CapEx and OpEx savings.
Adoption will come, but I think a few things still need to happen before this technology takes off and becomes ubiquitous in the enterprise and HPC user base.
Standards – there are no standards in this technology yet, though there is a lot of discussion. The industry needs standards so that a user with brand X can also bring in brand Y and have it all seamlessly interoperate: no special drivers, no hand holding, and no genuflecting to make it work.
Real-world user information – there is simply not enough of this. Yes, there are occasional articles, but not enough. The data center user is a conservative buyer and does not want to be first; they want to know who in their industry has deployed this technology in the data center for production work. Not an evaluation or proof of concept, but a real deployment.
Reliability – are there too many early SSD unit failures? Probably, though no one is likely to admit it. On reliability, you also get into the debate of MLC vs. SLC. MLC is lower cost and lower performance than SLC; it is getting better, but it simply is not ready for mission-critical, prime-time data center use. There is no question SLC is more reliable today, but at a price. This leads to the next adoption criterion.
Price – many prospective users compare the $/GB of flash SSDs with the $/GB of HDD technology, and no surprise here, HDD is cheaper to buy. Users have to look beyond pure acquisition price and compare lifetime cost per GB, power savings, and other factors, which will clearly show that over a 3- or 5-year cost-of-ownership horizon the pendulum swings toward SSD technology.
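To make the cost-of-ownership point concrete, here is a minimal back-of-envelope sketch. Every number in it (drive prices, per-drive IOPS, power draw, electricity rate, workload size) is a hypothetical assumption for illustration, not vendor or market data; the point is only that for an IOPS-bound workload the comparison flips once you count drive counts and power, not just $/GB.

```python
# Illustrative 5-year total cost of ownership for an IOPS-bound workload.
# All figures are hypothetical placeholders chosen for illustration.

def tco(unit_price, units, watts_per_unit, years=5, dollars_per_kwh=0.10):
    """Acquisition cost plus electricity over the ownership period."""
    capex = unit_price * units
    kwh = watts_per_unit * units * 24 * 365 * years / 1000.0  # W-hours -> kWh
    opex = kwh * dollars_per_kwh
    return capex + opex

# Suppose the workload needs 50,000 IOPS.
# HDDs at ~200 IOPS each would need 250 spindles; assume one SSD covers it.
hdd_total = tco(unit_price=300, units=250, watts_per_unit=10)
ssd_total = tco(unit_price=2000, units=1, watts_per_unit=5)

print(f"HDD 5-year TCO: ${hdd_total:,.0f}")
print(f"SSD 5-year TCO: ${ssd_total:,.0f}")
```

Even with the SSD priced at several times the HDD per unit, the spindle count required to hit the IOPS target (plus five years of power for those spindles) dominates the comparison.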
Killer app – is there a clearly defined killer app? Not yet. SSD storage solutions can have a dramatic impact on performance and cost savings in applications ranging from Web 2.0 to HPC number crunching to data analysis, but there is no clear killer app. No question a clear-cut killer app would be beneficial. It would be supported by a usage model, clear-cut ROI analysis, benchmarking data, and system-level workloads (not a single app). When you have this, you get away from specsmanship and market true business value.
Let’s see what tomorrow brings.