December 06, 2011
Poznan Supercomputing and Networking Center to deploy 224 peak teraflop GPU cluster
FREMONT, Calif., Dec. 5 -- SGI, the trusted leader in technical computing, today announced that the Poznan Supercomputing and Networking Center (PSNC), headquartered in Poznan, Poland, and affiliated with the Institute of Bioorganic Chemistry at the Polish Academy of Sciences, has purchased an SGI high performance computing (HPC) solution. In total, the system will deliver 224 peak teraflops of performance and will enable advanced academic and scientific research throughout Poland.
The system chosen consists of an SGI Rackable C1103-G15 cluster built on AMD Opteron 6200 series processors: 120 servers with one NVIDIA Tesla M2050 GPU each, plus another 107 servers with two M2050 GPUs each. The full configuration will feature 12.6 TB of memory, 5,448 CPU cores and 149,632 GPU CUDA cores, and is intended to satisfy the growing performance requirements of PSNC users.
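The quoted totals can be cross-checked from the per-server configuration. As a minimal sketch: the 448 CUDA cores per M2050 is the published spec for that board, while the 24 CPU cores per server is an assumption inferred from the stated totals (consistent with, e.g., dual 12-core Opteron 6200-series parts).

```python
# Sanity-check the PSNC cluster totals quoted in the announcement.
CUDA_CORES_PER_M2050 = 448   # published spec for the Tesla M2050 (Fermi)
CPU_CORES_PER_SERVER = 24    # assumption: e.g., two 12-core Opteron 6200 CPUs

single_gpu_servers = 120     # one M2050 per node
dual_gpu_servers = 107       # two M2050s per node

total_servers = single_gpu_servers + dual_gpu_servers        # 227 servers
total_gpus = single_gpu_servers * 1 + dual_gpu_servers * 2   # 334 GPUs
total_cuda_cores = total_gpus * CUDA_CORES_PER_M2050         # 149,632
total_cpu_cores = total_servers * CPU_CORES_PER_SERVER       # 5,448

print(total_servers, total_gpus, total_cuda_cores, total_cpu_cores)
```

Both derived figures match the announcement exactly, which also confirms the 227-server node count implied by the two tranches.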
"We decided to go for a combination of the most modern CPU technology together with proven GPU technology," said Norbert Meyer, director of the supercomputing facility at Poznan. "Such a combination allows us to ensure optimal parameters for our users. This system, which was financed with structural funds from the POWIEW and PL-GRID national projects and will also be used in PRACE (an EU project), allowed us to reach 63 teraflops, enough for a place on the TOP500 list today, with less than half of the nodes installed for the test. The full configuration will feature more than twice as many nodes as the measured system, which ensures ample performance for our users as well as a good position on the June 2012 TOP500 list."
PSNC has been a leader in implementing innovative technologies for the national scientific network over the last 20 years, and currently operates PIONIER, the Polish Optical Internet network. The HPC center within PSNC provides computing power, disk space and archiving systems for science, business and public institutions, and has already appeared several times on the TOP500 list of the world's most powerful computing systems. Its computing capacity includes distributed- and shared-memory systems of various architectures, including parallel vector machines, multi-processor SMPs (among them the SGI UV 1000 with 2,048 cores and 16 TB of shared memory) and clusters connected via fast local networks including InfiniBand, Gigabit Ethernet and Fast Ethernet.
"The practice of combining CPUs and GPUs to optimize price and performance continues to grow," said Bill Mannel, vice president of product marketing at SGI. "Allowing our customers to choose specific configurations reinforces our commitment to providing the most flexible product offerings to meet their data-intensive, technical computing workload challenges."
SGI, the trusted leader in technical computing, is focused on helping customers solve their most demanding business and technology challenges. Visit sgi.com for more information.