December 08, 2010
Dec. 8 -- In modern science and engineering, as in college football, sometimes the bigger your lineup, the more you win. Winning in research -- making discoveries -- requires massive amounts of computing capacity. In this area, Purdue has one of the best front lines in the nation.
A recent ranking of the world's largest supercomputers -- compiled twice a year by www.TOP500.org -- placed two Purdue supercomputers, Rossmann and Coates, among the 150 largest machines in the world.
When compared with other academic research institutions in the United States, Purdue places fifth in the nation in computing capacity.
Purdue's ranking will be announced today (Dec. 8) at the Cyberinfrastructure Days conference.
Gerry McCartney, Purdue's chief information officer, vice president for information technology and the Olga Oesterle England Professor of Information Technology, says Purdue has aggressively developed its computing infrastructure to meet the needs of the world-class research being done by Purdue faculty.
"Although we're pleased to be ranked so high, this is about creating the nation's best environment for science and engineering research, not about whose machine is bigger," McCartney said. "The research being done by Purdue faculty demands massive amounts of computing capacity, and we are meeting that demand."
Purdue's research computing capacity ranked behind that of three centers that serve faculty members nationally: the National Institute for Computational Sciences at the University of Tennessee; the Texas Advanced Computing Center at the University of Texas; and the Georgia Institute of Technology, commonly known as Georgia Tech. Only one campus-based resource ranked higher, at the University of Colorado.
"Looking at that list, Purdue stands out because our computing resources are available to campus researchers. Our faculty don't have to wait in line behind researchers from across the nation to get their work done," McCartney said. "If you narrow the list to only local resources available to campus faculty immediately, Purdue is currently leading the nation."
Purdue's Rossmann cluster supercomputer ranked 126th on the latest list, which is compiled by the international TOP500 Supercomputer Sites project. Rossmann was built in September 2010. Coates, which was built in 2009, ranked 147th.
Purdue also has a third supercomputer, Steele, built in 2008, which now falls just outside the top 500 and so was not counted in this ranking. Steele ranked as the 105th largest supercomputer in the world in 2008.
The supercomputers are operated by the Rosen Center for Advanced Computing, a division of Information Technology at Purdue, known on campus as ITaP (pronounced EYE-tap).
Since 2008, Purdue has funded the purchase of these supercomputers through a campus cooperative purchasing arrangement in which research faculty pool grant funds. The community cluster program received an Innovators Award from Campus Technology magazine in 2010.
Purdue plans to build a fourth community cluster supercomputer in 2011, and a fifth in 2012.
"In order to remain competitive for top research grants, Purdue must continue to build the best campus environment for scientists and engineers," McCartney said. "Our faculty realize this and have worked together and with ITaP to make this a reality. This is one of the secrets of Purdue's success."
Source: Purdue University