November 17, 2009
TOP500 list shows 39 out of Top 50 supercomputers choose Sun storage
PORTLAND, Ore., Nov. 17 -- Sun Microsystems, Inc. today announced new products and technologies that extend its HPC leadership, maximize application performance and throughput, and provide superior building blocks for HPC systems. In addition, Sun is announcing new HPC customers, world-record performance and TOP500 List results that demonstrate its relentless system innovation. Sun doubled its number of entries since the June 2009 list with a total of 11 deployments providing nearly 2 PetaFLOPS (PFLOPS). For more information on Sun's HPC solutions, visit http://www.sun.com/hpc.
"Sun servers, storage and networking continue to fuel world record HPC performance and provide the building blocks for dozens of new Sun Constellation System deployments around the globe," said John Fowler, executive vice president of Systems Group at Sun Microsystems. "Corporations and scientists alike are using Sun server and storage innovation to gain competitive advantage and tackle the world's most complex problems."
Sun at Supercomputing 2009
Sun is featuring a range of its own HPC technologies at its booth (#435), including servers, unified storage, flash, networking and software, as well as third-party solutions like UniCluster by Univa, ideal for HPC applications. For more information on the innovative HPC technologies Sun is showcasing at Supercomputing 2009, visit http://www.sun.com/hpc or the Sun booth (#435) for live demonstrations. Sun's Supercomputing 2009 online press kit can be found at http://www.sun.com/aboutsun/media/presskits/2009-1117/.
New products and solutions announced today include:
Sun Storage 7410 Delivers Outstanding Performance with Increased Efficiency and Capacity
Sun has doubled the performance of the Sun Storage 7410 Unified Storage system by upgrading to up to four six-core AMD Opteron processors and adding new 2 TB drives. With more processing cores, twice the DRAM cache -- up to 512 gigabytes (GB) -- and double the storage capacity -- up to 576 terabytes (TB) -- the Sun Storage 7410 Unified Storage system delivers increased performance and system bandwidth. Combined with Sun's flash technologies, such as the Sun Storage F5100 Flash Array and solid state disks (SSDs), recent benchmark results have demonstrated performance increases of up to 107 percent running common MCAE applications such as MSC Nastran and ANSYS.
Sun Doubles Presence on TOP500 List
Sun technologies are powering some of the largest HPC systems in the world, with nearly 2 PetaFLOPS of performance represented on the latest TOP500 list released today. Sun doubled its overall presence on the list, with deployments including CLUMEQ (Canada), ETH Zurich (Swiss Federal Institute of Technology), the Korea Institute of Science and Technology Information (KISTI), Sandia National Laboratories and the University of Zurich.
Sun is also announcing new HPC customers today, including:
National Cheng Kung University (NCKU)
National Cheng Kung University (NCKU) is deploying Sun HPC systems to meet its increasing computing capacity needs for scientific research and study. Its systems are based on a variety of Sun hardware and software platforms, including Sun StorageTek 6140 arrays, Sun Fire X2200 M2 servers and X4200 M2 servers, a Sun SPARC Enterprise M9000 server, Sun Fire X4500 data servers, the Lustre file system and Solaris 10 Operating System (OS).
Consortium Laval, Universite du Quebec, McGill and Eastern Quebec (CLUMEQ)
CLUMEQ selected Sun's Constellation System to build a world-class, energy-efficient high performance computing datacenter providing over 77 TeraFLOPS. Sun technology includes 10 Sun Blade 6048 modular systems with Sun Blade X6275 server modules, a Sun Lustre Storage System based on the high performance Sun Storage J4400 array with 1 petabyte (PB) of storage capacity, as well as HPC-related software. CLUMEQ uses the Sun Constellation System to conduct complex research ranging from climate and ecosystem modeling to high energy particle physics, cosmology and data mining.
German High Performance Computing Centre for Climate and Earth System Research
Sun has supplied Europe's largest storage installation with a Sun Constellation System based on a Sun StorageTek SL8500 Modular Library System, to the German High Performance Computing Centre for Climate and Earth System Research (Deutsches Klimarechenzentrum, DKRZ), one of the centers performing simulations for the Intergovernmental Panel on Climate Change (IPCC) Assessment Report. The DKRZ also utilizes a range of Sun software including Lustre, Sun Storage Archive Manager (SAM), Sun QFS and Sun Grid Engine (SGE). The DKRZ provides top-end computing and storage capacity for complex simulation models. The new system provides a total of 65,000 media slots enabling over 65 PB to be stored on T10000B magnetic tape cassettes.
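The 65 PB figure follows directly from the cartridge capacity: the StorageTek T10000B stores 1 TB (native, uncompressed) per cartridge. A quick sanity check of the arithmetic, using decimal units:

```python
# Capacity check for the DKRZ SL8500 library (decimal units, native capacity).
slots = 65_000             # media slots quoted above
tb_per_cartridge = 1       # T10000B native capacity: 1 TB per cartridge
total_tb = slots * tb_per_cartridge
total_pb = total_tb / 1_000    # 1 PB = 1,000 TB (decimal)
print(f"{total_pb:.0f} PB")    # 65 PB
```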
Sun Systems Shine with New Records on HPC Benchmarks
Ranging from desktop to some of the world's largest installed systems, Sun Constellation Systems deliver outstanding performance results while demonstrating near-linear scalability and efficiency on various HPC workloads. Today, Sun announced the following benchmark results:
NAMD: The Sun Blade 6048 chassis with 48 Sun Fire X6275 blade servers (768 cores) and QDR InfiniBand, delivered the best published result on the molecular modeling NAMD benchmark, with up to 95 percent better performance than double data rate (DDR) IB and a scalability efficiency of nearly 80 percent.
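Scaling efficiency of the kind quoted here is conventionally computed as the measured speedup divided by the ideal (linear) speedup for the core-count ratio. A minimal sketch, using illustrative timings rather than the published NAMD measurements:

```python
def parallel_efficiency(t_base, n_base, t_n, n):
    """Efficiency of an n-core run relative to an n_base-core baseline.

    speedup = t_base / t_n; ideal speedup = n / n_base;
    efficiency = speedup / ideal (1.0 means perfectly linear scaling).
    """
    return (t_base / t_n) / (n / n_base)

# Illustrative numbers only (not the actual benchmark timings):
# a job that takes 100 s on 96 cores and 15.6 s on 768 cores
eff = parallel_efficiency(100.0, 96, 15.6, 768)
print(f"{eff:.0%}")  # ~80%
```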
FLUENT and RADIOSS: Sixteen Sun Fire X6275 blade servers (256 cores) outpaced a competing SGI Altix ICE system with the same number of cores, on one of the most popular MCAE applications -- FLUENT. FLUENT software solves fluid flow problems and is based on a numerical technique called computational fluid dynamics (CFD) which is heavily used in the automotive, aerospace and consumer products industries. In addition, Sun's cluster beat the SGI Altix ICE system using the prominent MCAE 'crash' code, RADIOSS from Altair, by over 40 percent.
Reverse Time Migration (RTM): The Sun Blade 6048 chassis with 12 Sun Fire X6275 blade servers, interconnected via integrated InfiniBand QDR Host Channel Adapters (HCA) and Quad Data Rate Switched Network Express Modules (QNEM) and using the Lustre file system, delivered up to a 20x performance improvement over traditional Gigabit Ethernet/Network File System configurations. RTM is one of the most popular seismic processing algorithms, often used in geophysical studies to produce high-quality images of complex subsurface structures. Sun's Constellation System offers a unique platform for customers looking to reduce their seismic processing time by a factor of two.
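At its core, RTM forward-propagates a source wavefield, backward-propagates the recorded receiver data, and correlates the two at every image point. A minimal sketch of the zero-lag cross-correlation imaging condition (the wavefield arrays below are random stand-ins for real finite-difference simulations, not Sun's or any vendor's implementation):

```python
import numpy as np

def rtm_image(src_wavefield, rcv_wavefield):
    """Zero-lag cross-correlation imaging condition.

    Both inputs have shape (nt, nx): the source wavefield propagated
    forward in time and the receiver wavefield propagated backward.
    Reflectors appear where the two wavefields coincide in time.
    """
    assert src_wavefield.shape == rcv_wavefield.shape
    return np.sum(src_wavefield * rcv_wavefield, axis=0)  # sum over time

# Toy stand-in wavefields: nt=200 time steps on an nx=50 grid
rng = np.random.default_rng(0)
S = rng.standard_normal((200, 50))
R = rng.standard_normal((200, 50))
image = rtm_image(S, R)
print(image.shape)  # one image value per grid point
```

In production codes the expensive part is the wave propagation itself, which is why the I/O and interconnect bandwidth cited above dominate RTM run times.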
SPECviewperf 10 and SPECfp2006: The Sun Ultra 27 workstation delivers the best performance in its class on the SPECviewperf 10 3D graphics rendering benchmark. Running OpenGL on the Windows Vista OS, Sun's workstation surpassed similar HP and Dell products on six out of eight tests. The Sun workstation, with OpenSolaris and Sun Studio software, continues to hold a single-chip world record on the SPECfp2006 benchmark, making it an ideal platform for floating point intensive applications used by CAD/CAM designers and MCAE engineers alike.
To see all HPC benchmark results on Sun's Open Network Systems, visit http://www.sun.com/hpc/benchmarks.
About Sun Microsystems, Inc.
Sun Microsystems (NASDAQ:JAVA) develops the technologies that power the global marketplace. Guided by a singular vision -- "The Network Is The Computer" -- Sun drives network participation through shared innovation, community development and open source leadership. Sun can be found in more than 100 countries and on the Web at http://sun.com.
Source: Sun Microsystems, Inc.