November 14, 2011
First Interlagos deployment in operation at University of Delaware
FREMONT, Calif., Nov. 14 -- Penguin Computing, an elite partner of the AMD Fusion Partner Program, today announced the immediate availability of AMD Opteron 6200 and 4200 Series processors on its refreshed Altus server line, as well as an early HPC cluster deployment powered by the AMD Opteron 6200 Series processor at the University of Delaware.
The cluster deployed at the University of Delaware comprises 200 compute servers interconnected through a QDR InfiniBand fabric. The system delivers a theoretical peak performance of 49.3 TFLOPS and an aggregate memory capacity of 13.5 TB. The Altus 1800i and the high-density Altus 1804 serve as the compute platforms for this deployment. The 1800i is a new version of Penguin's Altus 1800, updated to support maximum HyperTransport bus bandwidth for optimal performance with the new AMD Opteron 6200 and 4200 Series processors.
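As a rough sanity check on figures like these, theoretical peak is conventionally computed as nodes × sockets × cores × clock × FLOPs per cycle. The sketch below illustrates the arithmetic; the per-node configuration passed to the function is a hypothetical example, not the published University of Delaware spec, and only the aggregate 49.3 TFLOPS / 200-node figures come from the announcement.

```python
# Back-of-envelope peak-performance arithmetic for an HPC cluster.
# Only cluster_tflops and nodes below come from the press release;
# the example node configuration is an illustrative assumption.

def peak_tflops(nodes, sockets, cores_per_socket, ghz, flops_per_cycle):
    """Theoretical peak in TFLOPS: nodes * sockets * cores * clock * FLOPs/cycle.

    cores * ghz * flops_per_cycle gives GFLOPS per socket; dividing the
    total by 1e3 converts aggregate GFLOPS to TFLOPS.
    """
    return nodes * sockets * cores_per_socket * ghz * flops_per_cycle / 1e3

# Figures stated in the announcement:
cluster_tflops = 49.3
nodes = 200
per_node_gflops = cluster_tflops * 1e3 / nodes
print(f"Implied per-node peak: {per_node_gflops:.1f} GFLOPS")  # 246.5 GFLOPS

# Hypothetical dual-socket, 16-core, 2.3 GHz node at 4 FLOPs/cycle/core:
print(f"Example node peak: {peak_tflops(1, 2, 16, 2.3, 4):.4f} TFLOPS")
```

The implied per-node figure is an average across both Altus platform types in the deployment, so it need not match any single node configuration exactly.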
Featuring up to 16 cores per processor, this new generation of AMD CPUs delivers strong performance for multi-threaded HPC applications as well as virtualization environments. With integrated quad-channel DDR3 memory controllers and native support for DDR3 memory at clock speeds up to 1866 MHz, the processor's memory performance keeps pace with its high core density. HPC users, who are typically concerned with floating-point performance, will also benefit greatly from AMD Opteron 6200 Series processor instruction set extensions such as the Advanced Vector Extensions (AVX). Because AMD has maintained socket compatibility, systems based on the previous processor generation can be seamlessly upgraded.
"When deciding on the server platform for our upcoming cluster deployment, we looked at a number of solution providers and server platforms. With high core density, overall performance characteristics and processor availability at the top of our list of criteria, the AMD Opteron 6200 Series processor was a natural choice," says Daniel J. Grim, Chief Technology Officer, Information Technologies at the University of Delaware. "Given the success we have had with Penguin in the past and Penguin's close relationship with AMD, it made the most sense to select Penguin for this deployment as well."
"IT departments are increasingly under pressure to do more with less. A server platform that combines the latest AMD performance and efficiency-enhancing technologies with proven Linux expertise and support is what customers such as the University of Delaware are looking for," says Charles Wuischpard, CEO of Penguin Computing. "This new generation of AMD Opteron processors offers compelling new features and great core density. Penguin's customers in the High Performance and Enterprise Computing space will greatly benefit from this new processor architecture."
"Our new AMD Opteron 6200 and 4200 Series processors deliver unmatched core density that will greatly benefit Penguin Computing's customer base," says Paul Struhsaker, corporate vice president and general manager, Commercial Business at AMD. "While core count is important, a balanced processor architecture that ensures sufficient memory bandwidth for all cores is essential. AMD Opteron 6200 Series processors provide a compelling value proposition for HPC customers."
About Penguin Computing
For well over a decade Penguin Computing has been dedicated to delivering complete, integrated high performance computing (HPC) solutions that are innovative, cost effective, and easy to use. Penguin offers a complete end-to-end portfolio of products and solutions ranging from Linux servers and workstations to integrated, turn-key HPC clusters and cluster management software. For those who want to use supercomputing capabilities on-demand and pay as they go, Penguin offers Penguin on Demand (POD), a public HPC cloud that is available instantly and as needed. With its broad portfolio of solutions Penguin is the one-stop shop for HPC and enterprise customers and counts some of the world's most demanding HPC users as its customers, including Caterpillar, Life Technologies, Dolby, Lockheed Martin, the US Air Force, and the US Navy.
Source: Penguin Computing