HPC Matters is a joint blog in which contributors from the Tabor Communications team share their observations and insights on HPC matters.
January 18, 2011
Last month, the President's Council of Advisors on Science and Technology (PCAST) -- 20 of the nation's leading scientists and engineers selected by the President -- released a report entitled "Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology." The council argues that networking and information technology is a key enabler of economic competitiveness, national security and quality of life, and therefore should be appropriately funded. Don't let the humdrum summary fool you: there are revolutionary ideas afoot in this report, and William Gropp, professor of computer science at the University of Illinois, provides a rundown of those relevant to high performance computing. Gropp has a pretty good understanding of the material, since he was part of the team that authored the report.
One of the key claims in the report is that the TOP500 list alone is not a sufficient indicator of HPC prowess.
While the HPC community has long known that no single benchmark adequately captures the usefulness of a system, the PCAST report explicitly calls for a greater focus on what I'll call sustained performance: the ability to compute effectively on a wide range of problems:
"But the goal of our investment in HPC should be to solve computational problems that address our current national priorities,"
Addressing this is becoming critical, because developing systems designed solely to rank at the top of the TOP500 list will not provide the computational tools needed for productive science and engineering research.
Gropp asserts that the business-as-usual approach to high-end computing will no longer be effective, and that for HPC to continue to advance, a dramatic revamping will be required in all parts of the ecosystem: the hardware, software and algorithms. If this overhaul fails to happen, Gropp opines that the end of Moore's Law, and the relatively painless progress that goes with it, may really be at hand.
To avoid this fate, the report calls for "substantial and sustained" investment in a broad range of basic research for HPC, specifically:
"To lay the groundwork for such systems, we will need to undertake a substantial and sustained program of fundamental research on hardware, architectures, algorithms and software with the potential for enabling game-changing advances in high-performance computing."
Gropp concludes his analysis with a sobering glimpse into the future of HPC:
"Without a sustained investment in basic research into HPC, the historic increase in performance of HPC systems will slow down and eventually end. With such an investment, HPC will continue to provide scientists and engineers with the ability to solve the myriad of challenges that we face."
It's easy to dismiss Gropp's prediction as doom-and-gloom rhetoric, understandably intended to galvanize resources, but in a way I think he's right. I don't think anyone wants to see HPC's demise, but the likely scenario is that we will carry on doing business as usual, making incremental changes and tradeoffs and avoiding the really hard challenges until absolutely forced to do otherwise. I don't think we'll see really big changes unless we hit the rock bottom of stalled performance, or unless HPC experiences a game-changing breakthrough that recasts the trajectory of its progress. These kinds of scientific leaps can't be predicted, but increased support at the federal level raises their likelihood.
Posted by Tiffany Trader - January 18, 2011 @ 4:30 PM, Pacific Standard Time
Tiffany Trader is the editor of HPC in the Cloud. With a background in HPC publishing, she brings a wealth of knowledge and experience to bear on a range of topics relevant to the technical cloud computing space.