September 18, 2008
Despite the popularity of the Linpack benchmark, the majority of HPC users have already moved past a pure performance mentality. The most popular metrics being bandied about today are price-performance and performance-per-watt. But that still restricts our view of HPC investments to a relatively narrow aspect of system costs and benefits.
Over the past few years, there has been a lot of interest in looking at overall productivity as a way to manage and optimize computing investments. But that's where it gets squishy. Productivity is an abstract concept. Whereas everyone can more or less agree on how many teraflops a given system is capable of, it's much more difficult to measure how productive that system can be on a day-to-day basis. And to be useful, productivity has to incorporate all the facets of the HPC environment: humans, software and hardware. Devising a set of metrics to quantify those elements is the key.
As a principal proponent of the high productivity computing meme, Tabor Research has been evangelizing this approach to HPC since 2007. A handful of HPC vendors, as well as this publication, have jumped on the productivity bandwagon with varying degrees of enthusiasm. [Disclaimer: Tabor Research and Tabor Publications, publisher of HPCwire, are both owned by Tabor Communication, Inc.]
On Thursday, Tabor Research launched its HPC Productivity Analyzer, an online tool designed to help HPC lab directors and datacenter managers evaluate and improve their computing investments. "We're extremely excited about offering this to users," said Addison Snell, vice president and general manager of Tabor Research. "We've talked about productivity for years, but it's always been a hand-wavy kind of concept. This is the first methodology that takes a quantifiable look at how productive your HPC ecosystem is."
The concept for the tool grew from a research project with Microsoft in which the software giant was looking for ways to quantify productivity. Tabor Research took some of the early ideas from that engagement and evolved them into the HPC Productivity Analyzer.
Essentially the tool is a survey that collects information about the nature of your HPC infrastructure and the organization that surrounds it. As you enter the survey, you first fill out a site profile for basic information about the type and size of your organization, as well as the general nature of your IT hardware. At that point, you drop into the survey proper, which guides you through a series of ten questions that are used to capture organizational priorities and the way your HPC systems are being used.
Snell says the most critical information is captured in the first couple of questions, which ask you to choose the three most important metrics that you believe are driving productivity at your site, and to rank the purchase criteria for selecting HPC systems. The remaining questions address cost considerations, standards, software usage, physical prototyping, organizational structure, and funding.
Next comes workflow analysis. Here you estimate the workflow breakdown for three different roles: the end user, the system administrator, and the application developer. Each workflow is role-specific. For example, only the application developer has a coding phase. (If your system admin is spending time writing code, you have more fundamental problems than optimizing productivity.) The workflow analysis component is the slickest part of the tool.
The interface is very intuitive. With the mouse, you drag the edge of the workflow phase boxes to increase or decrease the relative amount of time you think is being spent in each phase. The other boxes adjust auto-magically so that the entire workflow always adds up to 100 percent. Hovering over a phase box lists the tasks mapped to that particular phase. And if you click on the box, a secondary set of boxes appears that allows you to specify task allocations under that phase.
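The rebalancing behavior is easy to picture in code. Below is a minimal Python sketch of one way such proportional rebalancing might work; the Analyzer's actual implementation is not public, and the phase names and function here are purely illustrative:

```python
def rebalance(phases, changed_phase, new_value):
    """Set one phase's share and scale the others so the total stays at 100."""
    new_value = max(0.0, min(100.0, new_value))
    others = [p for p in phases if p != changed_phase]
    old_total = sum(phases[p] for p in others)   # what the other phases used to hold
    remaining = 100.0 - new_value                # what they must hold now
    result = dict(phases)
    result[changed_phase] = new_value
    for p in others:
        # Preserve the old relative proportions; split evenly if all were zero.
        share = phases[p] / old_total if old_total else 1.0 / len(others)
        result[p] = remaining * share
    return result

# A hypothetical end-user workflow with four phases:
workflow = {"setup": 25.0, "run": 40.0, "analysis": 25.0, "reporting": 10.0}
print(rebalance(workflow, "run", 55.0))  # other phases shrink; total is still 100
```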
The visual aspect of the workflow analysis component makes it easy for the user to get an accurate reflection of time allocation. According to Snell, beta users of the tool found that the exercise of estimating workflow allocation was instructive in itself. (Do we really spend only 10 percent of our maintenance phase implementing bug fixes?) Most of them hadn't fully considered where their time was actually being spent, said Snell.
After completing the questionnaire and workflow analysis, hitting the Submit button will display your productivity results and offer some recommendations. The first set of results compares your workflow allocation to those of your peers in the general sector you occupy (industry, government or academia) and to those of your peers in all HPC sectors. The analysis focuses on workflow allocations that may be out of line relative to your peers, and tells you why this is important to your organization. For example, application development becomes more important if you're relying on in-house code versus ISV codes. Likewise, anything having to do with system administration becomes more important if you consider admin costs to be significant to your TCO.
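As a rough illustration of that comparison step, the sketch below flags phases whose allocation deviates from a peer average by more than a chosen margin. The peer figures, the margin, and the function name are hypothetical assumptions, not taken from the Analyzer itself:

```python
def flag_deviations(mine, peer_avg, margin=5.0):
    """Return phases where my allocation differs from the peer average
    by more than `margin` percentage points (positive = above peers)."""
    return {
        phase: round(mine[phase] - peer_avg.get(phase, 0.0), 1)
        for phase in mine
        if abs(mine[phase] - peer_avg.get(phase, 0.0)) > margin
    }

my_workflow = {"setup": 28.0, "run": 37.0, "analysis": 25.0, "reporting": 10.0}
sector_avg  = {"setup": 15.0, "run": 50.0, "analysis": 25.0, "reporting": 10.0}
print(flag_deviations(my_workflow, sector_avg))  # {'setup': 13.0, 'run': -13.0}
```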
Based on the results, the recommendations offer ways for you to optimize your workflow. Internally, the tool draws on a library of several dozen recommendations that map to particular scenarios. Snell said the library will continue to grow and become more refined as more data is collected for specific circumstances. "We are picking recommendations based on what phase you're having trouble with and other factors having to do with your type of organization," he explained. "So that is the secret sauce -- that plus the overall methodology on how to evaluate productivity. Nobody's ever quantified this before."
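Snell calls that mapping proprietary, but the general shape of such a lookup is easy to imagine. The following sketch uses invented scenario keys and advice text purely to suggest how flagged phases and site attributes could select recommendations; none of it comes from the tool itself:

```python
# Hypothetical library: (out-of-line phase, site attribute) -> recommendation text.
LIBRARY = {
    ("setup", "in_house_code"):
        "Invest in shared build and deployment scripts to shorten job setup.",
    ("administration", "admin_cost_significant"):
        "Consolidate system-management tooling to reduce admin overhead.",
}

def recommend(flagged_phases, site_attributes):
    """Collect every recommendation matching a flagged phase and a site attribute."""
    return [LIBRARY[(phase, attr)]
            for phase in flagged_phases
            for attr in site_attributes
            if (phase, attr) in LIBRARY]

print(recommend(["setup", "run"], ["in_house_code"]))
```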
The peer database has been populated by over 100 early access users, and as more people exercise the tool, the database will be updated dynamically. Since this is version 1.0 of the HPC Productivity Analyzer, user feedback is being sought, both to improve the interface and to refine the methodology. The tool is free to use after registration and is available at http://www.HPCproductivity.com if you want to give it a whirl.