August 01, 2012
Argonne National Laboratory is home to Mira, currently the third fastest system on the TOP500 list. The 48-rack IBM Blue Gene/Q supercomputer runs on 786,432 cores and cranks out more than 8 Linpack petaflops. While Mira is not yet fully operational, applications are already being optimized to run on the machine. Today, InformationWeek detailed a number of workloads the system is expected to handle.
Under the umbrella of Argonne’s Early Science Program, Mira will be assisting research in earthquake modeling, quantum mechanics, the effect of clouds on the climate, and materials science. These applications, along with others in the Early Science Program, should help researchers judge the system’s capabilities.
Mike Papka, the deputy associate director of the lab’s computing, environment and life sciences directorate, explained how applications would be ramped up on Mira. “A new architecture with a new system software stack, and at a scale that is larger than anyone else has run previously, results in a system that will have issues never seen before,” he said. “These issues need to be exposed and addressed before we go into production, and it often requires real users running real code on the system.”
Mira will be taking over for the Intrepid supercomputer, a Blue Gene/P machine. Back in 2008, the system ranked number four on the TOP500 list at 458 Linpack teraflops. Intrepid was used for an “immediate need” project during the summer of 2010, when researchers ran simulations of oil rising through water in response to the Deepwater Horizon oil spill disaster.
Intrepid will stay online until Mira becomes fully operational, at which point the system will most likely get decommissioned. The laboratory cannot support the operational costs of both systems, so Intrepid may get sold to a university or simply get stripped down for parts.
According to the article, 60 percent of Mira’s cycles will be allocated to the DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. The program allows researchers from industry, government and academia to submit proposals to a panel. The panel then reviews the proposals, selecting the applications that are most relevant to the program and backed by computationally ready software.
The Advanced Scientific Computing Research (ASCR) Leadership Computing Challenge accounts for another 30 percent of Mira’s computing time. This program works on issues aligned with the DOE’s energy priorities. Cycles related to the challenge will be allocated in June 2013.
The leftover resources will be reserved for “immediate need” workloads like Intrepid’s oil spill simulations.
Full story at InformationWeek