September 15, 2009
In an attempt to shake up the solid-state drive (SSD) business, startup Pliant Technology Inc. has launched the Enterprise Flash Drive (EFD) family of products. The company is touting its 2.5- and 3.5-inch "Lightning" EFDs as the fastest and most robust flash storage drives on the market.
Pliant is a private company based in Milpitas, Calif., and is backed by Lightspeed Venture Partners, Menlo Ventures, Arcturus Capital and Divergent Ventures. The company's management and development teams have their roots in the enterprise hard drive market, which has shaped the focus of their storage products toward sustained performance and reliability. Like all flash drive vendors, Pliant is aiming its product offerings at I/O-intensive applications in financial services, high-performance technical computing and digital media, as well as general enterprise computing.
The idea is to mix the high-performance EFD devices with standard hard drives, using the flash devices as the "hot" data tier to drive performance and the spinning disks as the secondary tier to provide capacity. With this approach, a storage design can take advantage of higher-capacity but slower disk drives, since most of the performance work is offloaded to the flash tier. Pliant says that for a typical 18 TB database application requiring 640,000 transactions/minute and 320,000-plus IOPS, you can cut CAPEX, dollars/IOPS, and dollars/GB in half just by replacing 25 15K hard disks with 21 10K hard disks plus 4 of its EFDs. At the same time, power consumption can be reduced from 16 KW to 2 KW.
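The tiering arithmetic behind this kind of claim can be sketched as a simple sizing exercise. The per-device figures below (EFD and disk IOPS, capacities, and the hot-data fraction) are illustrative assumptions for the sake of the sketch, not Pliant's published configuration:

```python
import math

def tier_sizing(capacity_gb, iops_target,
                efd_iops=120_000, efd_gb=150,   # assumed per-EFD figures
                hdd_iops=150, hdd_gb=900,       # assumed per-10K-HDD figures
                hot_fraction=0.99):
    """Return (n_efd, n_hdd): enough EFDs to absorb `hot_fraction` of
    the IOPS, plus enough disks to serve the leftover IOPS and hold
    the remaining ("cold") capacity."""
    n_efd = math.ceil(iops_target * hot_fraction / efd_iops)
    cold_capacity = max(capacity_gb - n_efd * efd_gb, 0)
    n_hdd = max(math.ceil(iops_target * (1 - hot_fraction) / hdd_iops),
                math.ceil(cold_capacity / hdd_gb))
    return n_efd, n_hdd

# The article's 18 TB / 320K-IOPS database example:
n_efd, n_hdd = tier_sizing(18_000, 320_000)
print(n_efd, n_hdd)  # 3 22
```

Under these assumed parameters the result lands in the same ballpark as the article's 4-EFD/21-disk mix; the key sensitivity is the hot-data fraction, since every percentage point of I/O left on the disk tier demands dozens of additional spindles.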
Pliant's initial offerings include the Lightning LB (150GB, 2.5-inch) and LS (300GB and 150GB, 3.5-inch) models. Both use Serial Attached SCSI (SAS) interfaces and are designed to slip into standard storage arrays and servers. The LB model delivers 120K IOPS, 420 MB/s of read performance and 220 MB/s of write performance; the LS numbers are 160K IOPS, 525 MB/s and 340 MB/s, respectively. Compared to Intel's X25-E SSD at 35K peak IOPS, 250 MB/s for reads and 120 MB/s for writes, that's quite an improvement -- not too surprising considering the X25-E uses the slower SATA interface. STEC offers a SAS-based SSD, named ZeusIOPS, and its numbers are somewhat better than Intel's at 80K IOPS, 350 MB/s for reads and 300 MB/s for writes.
But Pliant's big pitch is its real-world performance. In particular, the company is claiming that with a typical enterprise application read/write profile (between 90/10 and 60/40), the user can realize around 30K IOPS on a single port. That's nearly as good as the Intel flash device at peak read-only speed. Because of the nature of NAND memory, most SSD performance tails off precipitously as the proportion of write operations rises. One advantage the Pliant device has is its full duplex interface, so reads and writes can be serviced simultaneously.
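A toy model makes the full-duplex advantage concrete. On a half-duplex link, reads and writes queue behind one another, so mixed throughput is a harmonic blend of the two rates; on a full-duplex link they proceed concurrently, and whichever stream saturates first sets the ceiling. The rates below are illustrative numbers, not measurements of any of the drives discussed here:

```python
def half_duplex_iops(read_frac, read_iops, write_iops):
    """One shared channel: mixed throughput is a harmonic-mean blend
    of the read and write rates, weighted by the workload mix."""
    return 1.0 / (read_frac / read_iops + (1 - read_frac) / write_iops)

def full_duplex_iops(read_frac, read_iops, write_iops):
    """Reads and writes run concurrently: the stream that saturates
    its direction first caps the total."""
    if read_frac in (0.0, 1.0):
        return read_iops if read_frac == 1.0 else write_iops
    return min(read_iops / read_frac, write_iops / (1 - read_frac))

# A 70/30 read/write mix on a device that reads far faster than it writes:
for f in (half_duplex_iops, full_duplex_iops):
    print(f.__name__, round(f(0.7, 100_000, 20_000)))
```

In this sketch the full-duplex path sustains roughly 67K mixed IOPS versus about 45K half-duplex, which illustrates why write-heavy mixes punish devices whose interface serializes the two directions.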
Apparently the company has accomplished this without the write cache found in most other enterprise SSD offerings. A write cache is normally used to mask the poor write characteristics of NAND memory, but Pliant's engineers concluded that the cache algorithms weren't all that effective in real-world situations. Plus, since the cache is volatile, there is a risk of data corruption if power is interrupted.
Pliant claims its data reliability is among the best in the industry, with an error rate of less than one sector per 10^17 bits read. That's two orders of magnitude greater data reliability than standard enterprise SSDs and an order of magnitude better than a decent enterprise hard drive.
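To get a feel for what those orders of magnitude mean, the expected number of unrecoverable sector errors scales linearly with the volume of data read. The comparison rates below (10^15 bits for a standard SSD, 10^16 for an enterprise hard drive) are inferred from the article's "two orders / one order" framing, not vendor specifications:

```python
def expected_sector_errors(bytes_read, ber_bits):
    """Expected unrecoverable sector errors after reading `bytes_read`
    bytes from a device with one sector error per `ber_bits` bits."""
    return bytes_read * 8 / ber_bits

petabyte = 10**15  # bytes
for name, ber in (("Pliant EFD", 1e17),
                  ("standard enterprise SSD", 1e15),
                  ("enterprise HDD", 1e16)):
    print(f"{name}: {expected_sector_errors(petabyte, ber)} expected errors/PB read")
```

Reading a full petabyte, the sketch predicts under a tenth of an expected sector error at Pliant's claimed rate, versus several at the assumed standard-SSD rate.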
Also, unlike many SSDs, Pliant says the EFD guarantees unlimited writes over its five-year lifetime, and is able to maintain the same performance profile over that span. Because of the difficulty of managing the natural degradation of NAND memory, some flash vendors recommend capping write usage at no more than 5 GB per day, which limits the application profile significantly. (Forget about using it for journaling, for example.) In that respect, Pliant's goal was to make the device just as flexible as a hard drive. Other reliability features include redundant ECC-protected metadata, patrol reads, and support for the T10 data integrity field standard.
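Some quick arithmetic shows how restrictive a 5 GB/day cap is. On a 150 GB drive it amounts to a small fraction of one full-drive overwrite per day, while even a modest sustained journaling stream blows through the cap in minutes; the 40 MB/s journaling rate below is an illustrative assumption:

```python
def drive_writes_per_day(daily_write_gb, capacity_gb):
    """Full-capacity overwrites per day implied by a daily write cap."""
    return daily_write_gb / capacity_gb

# The 5 GB/day cap on a 150 GB drive: about 1/30th of a drive write per day.
print(round(drive_writes_per_day(5, 150), 4))  # 0.0333

# An assumed 40 MB/s sustained journaling workload, in GB written per day:
journal_gb_per_day = 40 / 1024 * 86_400
print(round(journal_gb_per_day))  # 3375
```

At roughly 3,375 GB/day, such a workload exceeds the 5 GB/day cap by nearly three orders of magnitude, which is the sense in which the cap rules out journaling-style use.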
So how did they make the flash so smart? Pliant engineers overcame some of the deficiencies of NAND memory by developing their own custom ASIC controller and applying a unique software memory model. According to Greg Goelz, Pliant's vice president of marketing, the trick was to find the minimal amount of silicon required to support their performance and reliability goals. "As it turns out we needed an ASIC," he said. "The reality is we couldn't run these data reliability features or these performance characteristics on an FPGA."
Goelz claimed that, because of the robustness of the ASIC, Pliant can use NAND memory from virtually any supplier, even sub-par parts. In fact, top-of-the-line NAND is not necessarily the best choice here, since high-endurance varieties tend to dramatically reduce write performance, not to mention the extra cost. For the time being, Pliant has settled on Samsung SLC NAND, and is currently looking for a second source.
The Lightning EFD products are being delivered for OEM evaluation and qualification, and will be available via authorized channel partners sometime this month. Pliant has yet to disclose pricing.
The biggest challenge for the company will be to convince system OEMs, storage manufacturers, and integrators to incorporate its products. With companies like STEC and Intel getting a head start, it will be a battle to unseat the incumbents. The good news is that the flash market is probably in the knee of its growth curve, so there is likely plenty of room for multiple players, especially ones that can demonstrate some compelling product differentiation. If Pliant's claims for its new EFD product line hold true, they'll have plenty of that.