February 12, 2013
SANTA CLARA, Calif., Feb. 12 – Intel Corporation today announced Intel Cache Acceleration Software (CAS) for Linux, which prioritizes application performance, delivering solid-state drive (SSD) levels of speed without data migration costs and providing built-in data integrity for Intel’s SSD data center family of products.
The Linux version of the software will be generally available within 30 days as an enterprise subscription and open source release. It will complement the existing Intel CAS 2.0 for Windows, which is available today.
Intel CAS 2.0 is based on technology from Intel’s August 2012 acquisition of NEVEX Virtual Technologies, and provides key capabilities that enhance the company’s SSD data center family of products. Intel’s CAS 2.0 solution provides significant input/output (I/O) and application performance improvements in use cases including database/OLTP, virtualization, cloud and big data (Hadoop).
“With the combination of Intel CAS and the Intel 910 PCIe SSD, Intel has a solution that is a game changer for server-side caching,” said James Bagley, senior analyst at Storage Strategies-NOW. “It allows IT to easily add more performance to their existing storage infrastructure, delivering SSD levels of speed without a complete data migration.”
“Intel CAS complements our SSD data center family by providing a total caching solution that delivers even more value and capability for our customers,” said Chuck Brown, product line manager for Intel’s Non-Volatile Memory Solutions Group. “Intel CAS delivers a multi-level cache across the SSD and DRAM for optimal performance. Compared to short-stroked hard-drive technology, we’ve seen up to 50 times the improvement in I/O performance throughput for read intensive workloads by adding Intel CAS with the Intel SSD 910 series.”
“The idea of improving application performance without breaking the bank is one that has obvious appeal in today’s increasingly complex and demanding IT environments,” said Mark Peters, senior analyst at Enterprise Strategy Group. “Intel’s CAS delivers on this aspiration in two ways: First, it is based upon Intel’s high-performance and highly reliable solid-state solutions, which ensures easy integration and data integrity; and second, it is differentiated from ‘vanilla’ solid-state solutions in its use of multi-level caching across DRAM and SSD, which helps to simultaneously drive overall performance above, and overall costs below, that of less sophisticated SSD-only solutions.”
Provides Hassle-Free Application Acceleration and Performance
The Intel CAS solution provides significantly improved performance for I/O-intensive applications running on dedicated servers or virtual machines (VMs). With its unique policy-based caching, Intel CAS can target performance to specific applications, files, VMs or individual database tables. Selective, optimized caching allows administrators to focus performance on the applications and data that directly impact the business, while enabling consistent I/O acceleration by avoiding contention with other applications and server activities. A scalable application server tier with direct-attached SSDs provides low latency and consistent performance, complementing SAN technologies for capacity storage tiers such as archive, backup and recovery. By balancing CPU and I/O performance, Intel CAS offers companies lower TCO with an optimized, integrated solution.
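To make the multi-level caching idea described above concrete, the sketch below models a two-tier read cache: a small, fast "DRAM" tier backed by a larger "SSD" tier, with items promoted on access and demoted on eviction. This is a hypothetical illustration of the general technique, not Intel CAS code; the class name, tier sizes and LRU policy are all assumptions for the example.

```python
from collections import OrderedDict


class TwoTierCache:
    """Illustrative two-tier (DRAM + SSD) read cache.

    Hypothetical sketch only -- not Intel CAS code. Hot items live in a
    small "DRAM" tier; items evicted from DRAM spill into a larger "SSD"
    tier before being dropped entirely. Both tiers use LRU eviction.
    """

    def __init__(self, dram_capacity: int, ssd_capacity: int):
        self.dram = OrderedDict()          # fast, small tier
        self.ssd = OrderedDict()           # slower, larger tier
        self.dram_capacity = dram_capacity
        self.ssd_capacity = ssd_capacity

    def get(self, key):
        if key in self.dram:
            self.dram.move_to_end(key)     # refresh LRU position
            return self.dram[key]
        if key in self.ssd:
            value = self.ssd.pop(key)      # promote to DRAM on a hit
            self.put(key, value)
            return value
        return None                        # miss: caller reads backing store

    def put(self, key, value):
        self.dram[key] = value
        self.dram.move_to_end(key)
        if len(self.dram) > self.dram_capacity:
            old_key, old_value = self.dram.popitem(last=False)
            self.ssd[old_key] = old_value  # demote coldest DRAM item to SSD
            if len(self.ssd) > self.ssd_capacity:
                self.ssd.popitem(last=False)   # evict coldest SSD item


cache = TwoTierCache(dram_capacity=2, ssd_capacity=4)
for block in ("a", "b", "c"):
    cache.put(block, f"data-{block}")
# "a" was demoted to the SSD tier; accessing it promotes it back to DRAM.
print(cache.get("a"))
```

A policy-based cache in the spirit of the paragraph above would add a filter in `put()` so that only blocks belonging to designated applications, files or tables are admitted to the cache at all.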
“Attempts to solve I/O performance issues at the SAN tier have resulted in wasted spend, wasted productivity, and frustration to both users and IT administrators,” said Steve Dalton, General Manager of Attached Platform Storage Software at Intel. “We have the seamless answer with our Cache Acceleration Software solution. Intel CAS allows administrators to target storage performance to those applications that need it the most, offloading IOPS from primary storage to the servers themselves, providing a win-win with previous storage spend.”
Intel CAS 2.0 for Windows and Linux supports the Intel SSD data center family including the Intel DC S3700 and 910 Series. The open source code release will be available on intel.com.
Intel is a world leader in computing innovation. The company designs and builds the essential technologies that serve as the foundation for the world’s computing devices.