November 19, 2010
The industry's first terabyte memory drive achieves 21 GBytes/sec in a small 4U chassis
CHICAGO, Nov. 19 -- SC10 -- Kove, a leading high performance storage vendor, has announced the world's first Terabyte Memory Disk in conjunction with Mellanox Technologies and R Systems. The Kove Xpress Disk (XPD), in combination with Mellanox's ConnectX-2 40Gb/s InfiniBand, has achieved more than 20 GigaBytes per second of sustained data bandwidth for random reads and writes.
These results were achieved using a single Kove XPD Gen2 storage appliance equipped with six Mellanox ConnectX-2 40Gb/s InfiniBand ports, serving SRP (SCSI RDMA Protocol) storage via an InfiniBand fabric to an 11-node cluster. The equipment used during the testing process was provided by R Systems.
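For context, a rough back-of-the-envelope check (not from the announcement; it assumes each ConnectX-2 port is a 4x QDR InfiniBand link, whose 40 Gb/s signaling rate with 8b/10b encoding yields about 32 Gb/s of payload per port) suggests the six-port configuration has an effective ceiling of roughly 24 GB/s, putting the reported 20+ GB/s sustained figure in the range of 80-85 percent of the fabric's usable bandwidth:

```python
# Back-of-the-envelope estimate of the fabric ceiling behind the reported result.
# Assumption: each port is a 4x QDR InfiniBand link (40 Gb/s signaling, 8b/10b
# encoding), so roughly 32 Gb/s of payload per port.

PORTS = 6                    # ConnectX-2 40Gb/s ports on the single XPD appliance
SIGNAL_RATE_GBPS = 40.0      # QDR signaling rate per port, in gigabits/s
ENCODING_EFFICIENCY = 0.8    # 8b/10b line encoding overhead

effective_per_port_gbps = SIGNAL_RATE_GBPS * ENCODING_EFFICIENCY   # ~32 Gb/s
ceiling_gbytes_per_sec = PORTS * effective_per_port_gbps / 8       # bits -> bytes, ~24 GB/s

measured_gbytes_per_sec = 20.0   # "more than 20 GigaBytes per second" from the release

print(f"Estimated fabric ceiling: {ceiling_gbytes_per_sec:.0f} GB/s")
print(f"Measured share of ceiling: {measured_gbytes_per_sec / ceiling_gbytes_per_sec:.0%}")
```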
The Kove XPD is the world's fastest storage device in a 4U chassis, providing continuous, sustained I/O for any duration. The record-setting performance does not degrade under load or over time, allowing organizations to drastically reduce I/O load and remove storage bottlenecks.
"The next generation Kove Xpress Memory Disk is a continuation of our leadership in high performance storage. The XPD Gen2 provides uncompressed performance for any type of I/O with no loss of performance over time," states John Overton, Kove CEO.
"We are pleased to support Kove in building a world-leading InfiniBand-based storage solution," said Gilad Shainer, senior director, HPC and technical computing, at Mellanox Technologies. "By taking advantage of InfiniBand's inherent efficiencies and Mellanox's advanced offloading and RDMA technology, Kove is able to demonstrate storage throughput capabilities that can help eliminate I/O bottlenecks for next-generation HPC and enterprise datacenters."
Kove technology has an immediate and demonstrable advantage for most systems. "We employed Kove Xpress Disk to address an I/O bottleneck that caused poor response in metadata transactions," explains Ramon Williamson, storage systems engineer at Purdue University. "A response of over four hours was reduced to seven minutes using the Kove Xpress Disk, with similar performance seen from day one for all I/O requests to the system. I, and more importantly, my users, are thrilled with the amazing performance achieved. Outstanding!"
A live demonstration of the industry-leading Kove XPD Gen2 took place this week at the Supercomputing 2010 conference (SC10).
Formed in 2004, Kove (www.kove.com) is a pioneering leader in high performance storage. Kove provides patented, core technology components to solve the most challenging storage and data management needs.
Mellanox Technologies (www.mellanox.com) is a leading supplier of end-to-end connectivity solutions for servers and storage that optimize datacenter performance. Mellanox products deliver market-leading bandwidth, performance, scalability, power conservation and cost-effectiveness while converging multiple legacy network technologies into one future-proof solution. For the best in performance and scalability, Mellanox is the choice for Fortune 500 datacenters and the world's most powerful supercomputers. Founded in 1999, Mellanox Technologies is headquartered in Sunnyvale, Calif., and Yokneam, Israel.