July 14, 2006
The task of debugging huge computer programs can be made faster and easier by using new software tools developed by programming experts at the University of Illinois at Urbana-Champaign.
Computer science professor Yuanyuan Zhou and her students have assembled a suite of software tools that can find and correct bugs by inferring the programmer's intentions. The tools draw on observations of how programmers write code.
"Most bug-detection tools require reproduction of bugs during execution," Zhou said. "The program is slowed down significantly and monitored by these tools, which watch for certain types of abnormal behavior. Most of our tools, however, work by only examining the source code for defects, requiring little effort from programmers."
Copy-pasted code, for example, often appears in large programs. While it saves considerable programming effort, copy-pasted code can be the source of numerous bugs. Zhou's copy-paste tool, called CP-Miner, uses data-mining techniques to find copy-pasted code in a program and then checks whether that code was modified consistently.
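The kind of defect CP-Miner targets can be illustrated with a small, hypothetical C fragment; the names and constants below are invented for the example and are not taken from any code CP-Miner has analyzed. A block is copied, one identifier is renamed, but another is not, and the copy silently writes past the end of an array.

    /* Illustrative only: a hypothetical copy-paste bug of the kind CP-Miner flags. */
    #include <stddef.h>

    #define MAX_READERS 16
    #define MAX_WRITERS 8

    int reader_ids[MAX_READERS];
    int writer_ids[MAX_WRITERS];

    /* Original block. */
    void reset_readers(void)
    {
        for (size_t i = 0; i < MAX_READERS; i++)
            reader_ids[i] = -1;
    }

    /* Copy-pasted block: the array was renamed, but the loop bound was not,
     * so the loop writes past the end of writer_ids. */
    void reset_writers(void)
    {
        for (size_t i = 0; i < MAX_READERS; i++)   /* should be MAX_WRITERS */
            writer_ids[i] = -1;
    }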
CP-Miner has found many bugs in the latest versions of large open-source software used in the information technology industry, Zhou said. CP-Miner is fast and efficient -- it can scan 3-4 million lines of code for copy-paste and related bugs in less than 30 minutes.
Large programs also tend to follow many implicit rules and assumptions, so Zhou and her students developed a related tool, called PR-Miner, to detect when those rules have been broken. Like CP-Miner, PR-Miner is based on data-mining techniques.
"First, we mine the source code for patterns, repetitions and correlations that point to implicit programming rules and assumptions," Zhou said. "Then we check that those rules and assumptions have not been violated."
PR-Miner also is very fast and has found many bugs in the latest open-source software. It takes PR-Miner only a few minutes to scan 4 million lines of code.
In addition to detecting, diagnosing and fixing bugs, Zhou and her students are exploring techniques that allow software to survive in the presence of bugs. Rx, for example, is a recovery tool that allows software to survive by treating bugs like allergies.
"If you are allergic to cats, you try to avoid cats," Zhou said. "In much the same way, Rx is avoidance therapy for software failure. If the software fails, Rx rolls the program back to a recent checkpoint, and re-executes the program in a modified environment."
A fourth tool, called Triage, diagnoses software failures at the end-user site. Following a human-like diagnosis protocol, Triage rapidly identifies the nature of the problem and provides valuable input to help programmers quickly understand the failure and fix the bug.
"If something bad happens or the software crashes, Triage's diagnosis protocol will start automatically and quickly suggest a temporary fix until programmers can release a fixing patch," Zhou said.
In addition to being fast and efficient, Zhou's software tools are scalable and can be tailored for specific software programs, including programs running on parallel processors.
The work was funded by the National Science Foundation and Intel Corp.
Source: James Kloeppel, University of Illinois