May 30, 2012
COLUMBUS, Ohio, May 30 -- The Ohio Supercomputer Center’s newest system would fall in the top half of the list of the world’s most powerful supercomputers based purely on speed, but the cluster would rank even higher – ninth in the United States and second among U.S. academic institutions – when comparing benchmarked performances against the maximum theoretical performance of the system.
“Major investments, such as the one made to purchase the new HP/Intel supercomputer, must be made carefully,” noted Jim Petro, chancellor of the Ohio Board of Regents – the state agency that created the Ohio Technology Consortium to oversee the Ohio Supercomputer Center (OSC) and other statewide technology resources. “These benchmark statistics show me that we’ve not only given our researchers a powerful resource with this investment, but also have given Ohio taxpayers a great value.”
Data from the most recently compiled list of the TOP500 project would rank the Ohio Supercomputer Center’s new Oakley Cluster as the 180th fastest supercomputer in the world, but also as the 22nd highest rated system in the world when comparing actual benchmark performances against the maximum theoretical performance of the system. (OSC/MacConnell)
OSC engineers recently opened the Oakley Cluster to general users after they benchmarked the performance of their new system, based on 694 HP ProLiant SL390 G7 servers. The engineers compared their benchmark results with the latest list of the world’s fastest supercomputers, generated twice each year by the TOP500 project. The international TOP500 project was started in 1993 to provide a reliable basis for tracking and detecting performance trends in supercomputing. Based on the comparisons, OSC’s new system would rank as the 180th fastest supercomputer in the world, 89th in the United States and 11th among U.S. academic institutions.
More interestingly, however, OSC engineers used additional data compiled by the TOP500 project to analyze how well the supercomputers on the list performed in the benchmark tests compared to the maximum theoretical performance of each machine.
In those comparisons, the Oakley Cluster ranked much higher – 22nd in the world, 9th in the United States and 2nd among U.S. academic institutions.
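The arithmetic behind those rankings is simple: divide the measured LINPACK result (Rmax) by the system’s theoretical peak (Rpeak). A minimal Python sketch of the calculation follows; the measured value shown is a placeholder for illustration, not Oakley’s published TOP500 figure.

    # Compute LINPACK efficiency: measured Rmax divided by theoretical Rpeak.
    # The Rmax value below is an illustrative placeholder, not an official TOP500 figure.
    def linpack_efficiency(rmax_tflops, rpeak_tflops):
        return 100.0 * rmax_tflops / rpeak_tflops

    rpeak = 88.0  # theoretical peak of the CPU-only system, in teraflops (from the article)
    rmax = 79.0   # hypothetical measured LINPACK result, in teraflops
    print(f"Efficiency: {linpack_efficiency(rmax, rpeak):.1f}%")  # about 89.8% with these numbers

The higher that percentage, the more of the machine’s theoretical capability actually reaches users’ applications.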
“What these performance efficiency findings tell us is that HP and OSC, working together, were able to optimize the performance of the machine to deliver more compute capability to our users,” said Kevin Wohlever, director of supercomputer operations at OSC. “We worked very closely with HP, Intel and other subcontractors for several months to design a highly efficient system that would meet the needs of our user communities, and all our careful design work has paid off.”
OSC’s HP Intel® Xeon® processor-based supercomputer, named after the famous Ohio sharpshooter Annie Oakley, can achieve 88 teraflops, which is tech-speak for performing 88 trillion calculations per second.
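That 88-teraflop figure is consistent with a back-of-the-envelope peak calculation from the system’s core count. In the sketch below, the clock rate and flops-per-cycle values are assumptions typical of Westmere-era Xeons, not numbers stated in the article.

    # Rough estimate of theoretical peak performance from the article's core count.
    # The clock rate and flops-per-cycle values are assumptions, not article figures.
    cores = 8328                # total cores reported for the Oakley Cluster
    clock_hz = 2.67e9           # assumed per-core clock rate
    flops_per_cycle = 4         # assumed double-precision flops per core per cycle
    peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
    print(f"Estimated peak: {peak_tflops:.0f} teraflops")  # prints roughly 89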
The new system features more cores (8,328) on half as many nodes (694) as the center’s former flagship system, the IBM Opteron™ 1350 Glenn Cluster. The Oakley Cluster provides nearly twice the memory per core (4 gigabytes) as Glenn and three times the number of graphics processing units, or GPUs (128). Oakley also delivers one and a half times the performance of the Glenn Cluster at just 60 percent of Glenn’s power consumption.
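Taken together, those two ratios imply a substantial gain in energy efficiency: delivering 1.5 times the performance at 0.6 times the power works out to roughly 2.5 times the performance per watt, as the short sketch below shows.

    # Performance-per-watt gain implied by the ratios quoted in the article.
    performance_ratio = 1.5   # Oakley performance relative to Glenn
    power_ratio = 0.6         # Oakley power consumption relative to Glenn
    print(f"Performance per watt vs. Glenn: {performance_ratio / power_ratio:.1f}x")  # 2.5x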
In addition, while not used in these efficiency studies, the OSC system can be equipped with NVIDIA® Tesla™ 2070 GPUs, giving it a total peak performance of 154 teraflops. The addition of 600 terabytes of new DataDirect Lustre storage also expands OSC storage to nearly two petabytes.
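The 154-teraflop combined figure can be decomposed as the 88-teraflop CPU peak plus the GPUs’ contribution. The per-GPU peak used below (about 0.515 double-precision teraflops for a Tesla 2070-class card) is an assumption for illustration; only the CPU peak and the GPU count come from the article.

    # Decomposition of the combined 154-teraflop peak (per-GPU figure is assumed).
    cpu_peak_tflops = 88.0
    gpu_count = 128
    gpu_peak_tflops_each = 0.515  # assumed double-precision peak per Tesla 2070-class GPU
    total = cpu_peak_tflops + gpu_count * gpu_peak_tflops_each
    print(f"Combined peak: {total:.0f} teraflops")  # about 154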
OSC is a state-funded high performance computing center that provides Ohio’s universities, industries and other clients with computation, software, storage and support services. OSC’s centralized support increases the opportunities for researchers statewide to innovate and successfully compete for grants and national supercomputing resources. Major users of OSC’s resources have focused on research in the areas of the biosciences, advanced materials, energy and the environment.
About the Ohio Supercomputer Center
The Ohio Supercomputer Center (OSC), a member of the Ohio Technology Consortium of the Ohio Board of Regents, addresses the rising computational demands of academic and industrial research communities by providing a robust shared infrastructure and proven expertise in advanced modeling, simulation and analysis. OSC empowers scientists with the vital resources essential to make extraordinary discoveries and innovations, partners with businesses and industry to leverage computational science as a competitive force in the global knowledge economy, and leads efforts to equip the workforce with the key technology skills required to secure 21st century jobs. For more, visit www.osc.edu.
About the Ohio Board of Regents
The Ohio Board of Regents is the state agency that coordinates higher education in Ohio, and its Chancellor, who is a member of the Governor of Ohio’s cabinet, directs the agency. The Chancellor, with the advice of the nine-member board, provides policy guidance to the Governor and the Ohio General Assembly, advocates for the University System of Ohio and carries out state higher education policy. For more, visit www.ohiohighered.org.
Source: The Ohio Supercomputer Center