June 21, 2011
HAMBURG, June 21, 2011 -- ISC'11 -- LSI Corporation (NYSE: LSI) today announced that the National Center for Supercomputing Applications (NCSA) has deployed a system utilizing LSI 6Gb/s SAS RAID controllers with LSI CacheCade™ cache tiering software to explore some of the unprecedented data storage challenges that will come with the Dark Energy Survey. The Dark Energy Survey is a large-scale, multinational effort to study the acceleration of the expanding universe. Involving more than 120 scientists from 23 institutions, it will be one of the most data-intensive astronomical research projects ever conceived.
LSI reseller partner International Computer Concepts built the system for NCSA around the LSI MegaRAID® SAS 9260-8i low-profile MD2 eight-port controller and innovative LSI CacheCade software for I/O performance acceleration. The system is designed to deliver the performance, throughput and scalability required to store and process approximately 200TB of raw image data in a database that is expected to grow by 400GB daily over the one-and-a-half-year life of the project.
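For scale, a quick back-of-the-envelope check of those figures is sketched below. The only assumption we add is reading the stated one-and-a-half-year project life as roughly 548 days; the 200TB and 400GB-per-day numbers come from the release itself.

```python
# Back-of-the-envelope check of the storage figures in the release.
# Assumption: the 1.5-year project life is ~548 days; the 200TB of raw
# images and 400GB/day of database growth are the release's own numbers.
daily_growth_gb = 400
project_days = 365 * 1.5              # ~548 days over the project life
raw_image_tb = 200

growth_tb = daily_growth_gb * project_days / 1000
print(f"Projected database growth: ~{growth_tb:.0f}TB")
print(f"Projected total footprint: ~{raw_image_tb + growth_tb:.0f}TB")
# -> roughly 219TB of growth on top of the 200TB of raw image data
```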
"When you consider that the Dark Energy Survey will examine more than 300 million galaxies and a 5,000 square degree surface area, it's no surprise that this will be one of the most rigorous data projects ever undertaken by NCSA," said Bernie Acs, informatics system designer and database architect at NCSA. "We may never know the exact size of the universe or why it appears to be expanding at an increasing rate, but thanks in part to the 9260-8i card and the CacheCade software from LSI, we have the tools to get us closer to the answer very soon."
LSI CacheCade software enables solid-state drives (SSDs) to act as a secondary tier of high-performance controller cache in front of hard drives, accelerating application and workload performance. The technology has accelerated the performance of NCSA's hard disk drive (HDD) arrays by enabling three 160GB SSDs to be configured as an additional high-performance read cache resource available to the controller. The solution has enabled NCSA to reduce the time required to create database indices from four to six hours down to only 15 minutes, an approximately 20x performance improvement. NCSA expects the performance benefits to scale linearly as up to three more 9260-8i cards with CacheCade software are added per system.
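In general terms, this read caching follows the familiar tiered-cache pattern: repeated reads are served from the fast SSD tier, and misses fall through to the HDDs and are promoted into the cache. The sketch below is a minimal illustration of that pattern; the class and names are hypothetical and do not reflect LSI's actual controller firmware or API.

```python
# Minimal sketch of SSD read-cache tiering in front of HDDs -- the general
# technique CacheCade implements in the controller. All names here are
# illustrative assumptions, not LSI's API.
from collections import OrderedDict

class TieredReadCache:
    """LRU read cache: hot blocks are served from a fast tier (SSD);
    misses fall through to the slow tier (HDD) and are promoted."""

    def __init__(self, capacity_blocks, hdd_read):
        self.capacity = capacity_blocks
        self.hdd_read = hdd_read          # callable: block id -> data
        self.ssd = OrderedDict()          # stands in for the SSD tier

    def read(self, block):
        if block in self.ssd:             # cache hit: fast-tier read
            self.ssd.move_to_end(block)
            return self.ssd[block]
        data = self.hdd_read(block)       # cache miss: slow-tier read
        self.ssd[block] = data            # promote into the fast tier
        if len(self.ssd) > self.capacity:
            self.ssd.popitem(last=False)  # evict least recently used
        return data

# Example: index builds re-read the same hot blocks, so hits dominate.
cache = TieredReadCache(capacity_blocks=3, hdd_read=lambda b: f"block-{b}")
for b in [1, 2, 3, 1, 2, 3]:
    cache.read(b)
```

Index creation is exactly this kind of workload: the same blocks are reread many times, which is why a relatively small SSD read cache can yield such a large speedup over HDD arrays alone.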
"The idea that there are areas of physics still yet to be discovered is quite extraordinary, and LSI is proud that our storage technologies are playing a key role in research of such galactic importance," said Brent Blanchard, director, worldwide channel sales and marketing, LSI. "From climate modeling to genome sequencing, LSI storage technologies are helping scientific researchers cost-effectively tackle some of the world's most diverse and demanding data storage challenges."
Additional information about the NCSA solution is available at
LSI will be demonstrating MegaRAID cards with CacheCade software at the 2011 International Supercomputing Conference (Stand #341) taking place this week in Hamburg, Germany.
LSI storage products, including its complete family of SATA+SAS RAID controllers and host bus adapters (HBAs), 6Gb/s SAS switch, advanced software options and the WarpDrive™ SLP-300 acceleration card are available through a worldwide network of distributors, system integrators and VARs. Additional information is available at www.lsi.com/channel.
LSI Corporation (NYSE: LSI) is a leading provider of innovative silicon and software technologies that enable products which seamlessly bring people, information and digital content together. The company offers a broad portfolio of capabilities and services including custom and standard product ICs, adapters and software that are trusted by the world's best known brands to power leading solutions in the Storage and Networking markets. More information is available at www.lsi.com.