March 18, 2013
WARWICK, United Kingdom, March 18 — Allinea Software has today cracked the performance-profiling pain barrier with the release of Allinea MAP, a powerful performance-analysis tool easy enough for scientists to use to diagnose problems in their own code.
“People have tried to make a profiler that scientists could use themselves, but until now no commercial company has put the time and resources into getting the user experience right,” says Mark O’Connor, Allinea Software’s VP of Product Management.
"In 2011, we recognized that software development would be one of the most serious challenges to scaling applications to the levels of extreme core counts that our scientific goals require”, recalls François Robin, IFERC/CSC Project Leader for CEA. “We needed not only highly scalable debugging tools, but also a capable and easy-to-use performance tool that could reach the extreme. Allinea Software's strong reputation for scalable development tools and for collaboration made them the obvious partner that could deliver."
No compiling – just clear results
Allinea MAP runs without requiring users to instrument their code or compile with special options. The program annotates the source code with performance information in colored graphs, so users can see any problems at a glance.
“Our scientific users immediately grasp the graphical nature used to show how time spent on computation and communication varies for each line of code. This gives them an intuitive understanding of where problems lie, so they can assess whether to call in an HPC performance expert for help,” says O’Connor.
Scalable but not data intensive
Allinea MAP is a lightweight application that adds little overhead even when scaled up to profile tens of thousands of processes.
“People assumed if you were to profile at scale, you’d need to store huge amounts of data — gigabytes or even terabytes. They were amazed that Allinea MAP stores only 10 to 20 megabytes,” says O’Connor.
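The release does not describe Allinea MAP’s internals, but the standard way a profiler keeps its data footprint this small is statistical sampling: periodically record which source line is executing and store only aggregated counts, never a trace of every event, so storage grows with the number of distinct code locations rather than with runtime. A minimal, generic Python sketch of that idea (illustrative only, not Allinea’s implementation; the 5 ms interval is an arbitrary choice, and the POSIX profiling timer is unavailable on Windows):

```python
import collections
import signal

# Aggregated per-line sample counts: this is all the profiler stores,
# so the data stays tiny no matter how long the program runs.
samples = collections.Counter()

def sample(signum, frame):
    # Record only where execution currently is, not a full event trace.
    samples[(frame.f_code.co_name, frame.f_lineno)] += 1

# POSIX-only: fire SIGPROF every 5 ms of consumed CPU time.
signal.signal(signal.SIGPROF, sample)
signal.setitimer(signal.ITIMER_PROF, 0.005, 0.005)

def busy():
    total = 0
    for i in range(3_000_000):
        total += i * i
    return total

busy()
signal.setitimer(signal.ITIMER_PROF, 0, 0)  # stop sampling

# The hottest lines accumulate the most samples.
for (func, line), n in samples.most_common(3):
    print(f"{func}:{line} -> {n} samples")
print(f"distinct locations stored: {len(samples)}")
```

The key property is that the `samples` table is bounded by the size of the code, which is why a sampling profiler can follow tens of thousands of processes while writing only megabytes of data.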
Allinea Software is also the creator of Allinea DDT, a popular debugging tool proven at more than 700,000 cores and installed on the majority of the top supercomputers. Allinea MAP is built on the same infrastructure, making it possible to profile at very large scale while adding only around 5% to total runtime.
“I think visual tools like Allinea MAP are the only way forward as we approach the daunting complexity of exascale computing,” says Rich Brueckner, president of the popular insideHPC news blog.
“Algorithms that scale at hundreds or thousands of nodes tend to behave very differently at ultra-scale, where one has tens of thousands or even millions of nodes to contend with,” says Brueckner. “How one tackles such a problem requires new approaches and ways of thinking. You are never going to make parallel computing easy. What you can do is give the programmer a way to navigate in an ocean of code.”
Allinea MAP can be combined with Allinea DDT, sharing a single interface, so when Allinea MAP shows where performance bottlenecks are forming, you can flip to the Allinea DDT view and step through the code to find the source of the problem.
A worldwide collaborative process
Allinea Software crowd-sourced the development of Allinea MAP with industry experts all over the world giving feedback on each iteration, from the whiteboard stage up to the release.
Oliver Perks and David Beckingsale, members of the University of Warwick’s Performance Computing and Visualisation Group, helped Allinea MAP evolve from a tool emphasizing source code to the visually rich interface it sports today.
“The fact that you can see the visual overview without delving into the numbers is incredibly useful. Allinea MAP saves days in preparing code for a supercomputer,” says Perks.
Steven Jarvis, head of the High Performance Systems Group at Warwick University, agrees: “Allinea MAP provides access to key performance metrics in a lightweight and intuitive framework, allowing us to profile code performance faster.”
With a tool that helps scientists see for themselves how performance bottlenecks form, the aim is to foster a culture of performance awareness rather than just fixing problems when they become too large to ignore.
“A lot of code out there is performing badly because the people who write and run it don’t have tools to rapidly and regularly analyze it. We’ve had HPC experts tell us they have to correct the same basic mistakes time after time,” says O’Connor. “A single optimization found with Allinea MAP can save hundreds of thousands of core hours over the lifetime of the code, delivering results faster and letting scientists focus on their real work instead of fighting the tools.”