October 12, 2009
New fully integrated flash arrays increase Oracle and MySQL Database performance by up to a factor of 10, with up to 80 percent reduction in operating costs
SANTA CLARA, Calif., Oct. 12 -- Sun Microsystems Inc. today announced a significant step forward for the industry with the introduction of the new Sun Storage F5100 Flash Array, which extends Sun's flash portfolio and offers customers a new way to scale storage performance. Sun is the first enterprise server and storage company to bring fully integrated flash-based storage with flash-optimized software to the enterprise; its new flash array is designed to accelerate Oracle and MySQL database workloads and optimize storage architectures for higher performance at lower cost.
The Sun F5100 Flash Array features up to two terabytes of solid-state flash capacity and an unprecedented 1.6 million read and 1.2 million write IOPS in a single rack unit (1.75 inches), yet consumes just 300 watts. That performance is comparable to 3,000 enterprise hard disk drives that span more than 14 data center racks and consume over 100 times the energy (40,000 watts).
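The drive-count comparison above can be sanity-checked with simple arithmetic. The per-drive IOPS figure below is an assumption (a typical value for a 15K RPM enterprise disk of that era), not a number from the release:

```python
# Back-of-envelope check of the release's HDD-equivalence claim.
ARRAY_IOPS = 1_600_000    # F5100 read IOPS (from the release)
ARRAY_WATTS = 300         # F5100 power draw (from the release)
HDD_FARM_WATTS = 40_000   # quoted power for the equivalent disk farm

HDD_IOPS = 530            # ASSUMED random-read IOPS per 15K RPM enterprise HDD

drives_needed = ARRAY_IOPS // HDD_IOPS       # roughly 3,000 drives
power_ratio = HDD_FARM_WATTS / ARRAY_WATTS   # over 100x the energy

print(drives_needed, round(power_ratio))
```

Under that per-drive assumption, matching the array's 1.6 million IOPS takes on the order of 3,000 disks, consistent with the figure quoted in the release.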
Sun has achieved world-record performance of 12.8 gigabyte-per-second of I/O bandwidth from one Sun F5100 array. Each Sun F5100 array is one rack-unit in height and can be zoned and connected to up to 16 separate hosts so that a single F5100 can be used by more than one application environment. Included unified management and monitoring software provides a single storage management window across a wide range of operating systems.
"Today's announcements build on Sun's strategy to lead a new storage hierarchy driven by flash technology to accelerate I/O throughput. No other vendor today is shipping fully-integrated flash-based hardware and software that leverages a world-class operating system to deliver breakthrough performance and value to our customers," said John Fowler, executive vice-president, Systems Group, Sun Microsystems.
Sun servers with FlashFire technology deliver world record performance across prominent enterprise and high-performance computing workloads
The Sun Storage F5100 Flash Array enabled the Sun SPARC Enterprise M4000 server to produce a world record result on the Oracle PeopleSoft Enterprise Payroll 9.0 N.A. application benchmark, which represents typical online transaction processing workloads for processing employee payroll. The high-performance, high-density Sun Storage F5100 Flash Array dramatically improved I/O performance for this application, delivering ten times better latency than traditional fibre channel disks while working with Oracle Database 11g to process up to 250,000 employee payroll checks.
Additionally, the Sun Storage F5100 Flash Array worked with the Sun Fire X4270 server to deliver the best performance on a suite of Mechanical Computer-Aided Engineering (MCAE) application tests that included MSC/NASTRAN, Abaqus/Standard and ANSYS 12.0. The combination of Sun flash storage and server technologies delivered a world record result on Abaqus/Standard and demonstrated improvements ranging from 65 percent to 2x on various subsets of ANSYS 12.0 BMD and MSC/NASTRAN compared with internal SAS disks configured with RAID0. These applications are based on finite element analysis (FEA) and represent the more I/O-intensive group of MCAE workloads, making the Sun Storage F5100 Flash Array a natural fit.
For more information on these leading benchmarks, visit http://sun.com/F5100. Look for additional benchmark announcements for Sun Storage F5100 Flash Array during Oracle Open World (Oct. 12-15, 2009).
"San Diego Supercomputer Center (SDSC) has been evaluating the F5100 Flash Storage array as a high performance SamQFS metadata target, which sits at the core of our archiving services and hosts well over one hundred million files. Performance improvement of 2.5 to four times was demonstrated for file creation and metadata scans, such as listing and backups. Further testing will be done using the Sun Storage F5100 as a Lustre metadata target, high speed storage pool in Lustre 2.0 for user checkpoint data, Oracle database storage device and out-of-core storage device on an HPC cluster," said Don Thorp, Production Systems, San Diego Supercomputer Center.
Flash performs best when integrated with software
Getting the best performance from these and other Flash devices is simplified through the use of Sun's ZFS Hybrid Storage Pools feature included in the Solaris Operating System (OS). The built-in automated tuning and extra resiliency features make it a popular choice for many customers.
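While the release does not describe configuration details, a ZFS Hybrid Storage Pool is typically built by adding flash devices to a conventional disk pool as a separate intent log (to accelerate synchronous writes) and as L2ARC cache (to accelerate random reads). A sketch using standard Solaris `zpool` commands, with hypothetical device names:

```shell
# Device names below are hypothetical; illustrative only.

# Create a pool from conventional mirrored disks.
zpool create tank mirror c0t1d0 c0t2d0

# Add a flash device as a separate ZFS intent log (slog) to speed up
# synchronous writes, such as database redo traffic.
zpool add tank log c1t0d0

# Add flash devices as L2ARC cache to speed up random reads.
zpool add tank cache c2t0d0 c2t1d0
```

ZFS then places data automatically across DRAM, flash and disk, which is the "built-in automated tuning" the paragraph above refers to.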
"Oracle customers are accustomed to getting great value from their Oracle Database deployments and are looking for best-in-class products to optimize response time from their database applications," said Andy Mendelsohn, senior vice-president, Database Group, Oracle. "Oracle and Sun Fire systems running Solaris OS are proven to deliver world-class reliability, scalability and performance for enterprise customers and we look forward to extending this success into this new family of FlashFire-based storage products."
To learn more about Sun's Flash Storage solutions, see www.sun.com/flash.
About Sun Microsystems, Inc.
Sun Microsystems (NASDAQ:JAVA) develops the technologies that power the global marketplace. Guided by a singular vision -- "The Network Is The Computer" -- Sun drives network participation through shared innovation, community development and open source leadership. Sun can be found in more than 100 countries and on the Web at http://sun.com.
Source: Sun Microsystems, Inc.