Research Abstracts - 2007
Prioritizing Warnings by Mining Software History

Sunghun Kim & Michael D. Ernst

Introduction

Automatic bug-finding tools tend to have high false positive rates: most warnings do not indicate real bugs. These tools usually assign a priority to each warning category. For example, "overflow" might be assigned priority 1 and "jumbled incremental" priority 3. However, the tools' prioritization is not very effective [4].

We develop a warning prioritization algorithm that mines previous warning-fix experience recorded in the software change history. The underlying intuition is that if warnings from a category are resolved quickly by developers, warnings in that category are important. Likewise, if warnings from a category are removed during fix changes, those warnings are important.

Problem Description

Bug-finding tools such as FindBugs [3], JLint [1], and PMD [2] analyze source or binary code and warn about potential bugs. These tools tend to have a high rate of false positives: most warnings do not indicate real bugs. To help developers focus on the important warnings, the tools prioritize warning categories so that likely false positives fall to the bottom of the list, but this prioritization is not very effective; Kremenek and Engler report that 30% to 100% of warnings are false positives [4].

Research Goals

Our research goal is to develop a generic warning reprioritization algorithm that mines the software change history. This algorithm puts important warnings at the top of the warning list, enabling developers to focus on them.

Technical Approach

We assign a weight to each warning category and train the weights by mining the software change history. After training, we prioritize warning categories by their weights.

The basic idea of training category weights is to treat each warning instance as a bug predictor. If the prediction is correct (the warning is removed in a fix change), we promote the category's weight. Similarly, the weight of a warning category is promoted if a warning instance from that category is removed in a non-fix change.

The proposed warning prioritization algorithm is described in Figure 1. The initial weight of category c, wc, is set to 0. After that, if a warning instance in category c is removed during a fix change, the weight is promoted by a; similarly, if a warning instance in category c is removed during a non-fix change, the weight is promoted by b. Since there are two promotion steps, the weights are determined by the ratio of a to b rather than by their actual values: a is an independent variable, and b depends on a (b = 1 - a).

Initial step: wc = 0
Fix promotion step: wc = wc + a,
   if a warning instance from category c is removed in a fix change.
Change promotion step: wc = wc + b,
   if a warning instance from category c is removed in a non-fix change.
Conditions on a and b:
   0 < a < 1, 0 < b < 1, b = 1 - a

Figure 1. Proposed prioritization algorithm using change history.
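The training procedure in Figure 1 can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation; the input format (a list of (category, was_fix_change) pairs describing removed warning instances) and the function name are assumptions for the example.

```python
# Hypothetical sketch of the Figure 1 weight-training algorithm.
from collections import defaultdict

def train_weights(removed_warnings, a):
    """Train per-category weights from warning removals in the change history.

    removed_warnings: iterable of (category, was_fix_change) pairs, one per
    warning instance removed by a change; was_fix_change is True when the
    removing change was a bug fix.  a (0 < a < 1) is the fix-promotion
    increment; the non-fix increment is b = 1 - a.
    """
    b = 1.0 - a
    weights = defaultdict(float)       # initial step: wc = 0
    for category, was_fix_change in removed_warnings:
        if was_fix_change:
            weights[category] += a     # fix promotion step
        else:
            weights[category] += b     # change promotion step
    return dict(weights)
```

With a = 0.9, a category whose warnings are twice removed by fixes accumulates weight 1.8, while one removed only in a single non-fix change gets 0.1, so fix removals dominate the ranking.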

A warning category gets a high weight if its warning instances are removed many times in fix or non-fix changes. In contrast, a warning category gets a low weight if its warning instances are seldom removed.
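Once category weights are trained, reprioritization reduces to sorting warning instances by their category's weight. A minimal sketch, assuming warnings are dictionaries with a "category" field (a representation chosen for this example, not specified in the abstract):

```python
def prioritize(warnings, weights):
    """Sort warning instances so that categories with the highest trained
    weight come first; categories never seen in training default to 0."""
    return sorted(warnings,
                  key=lambda w: weights.get(w["category"], 0.0),
                  reverse=True)
```

A stable sort keeps the tool's original ordering among warnings whose categories share the same weight.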

References

[1] C. Artho, "JLint - Find Bugs in Java Programs," 2006.

[2] T. Copeland, PMD Applied. Centennial Books, 2005.

[3] D. Hovemeyer and W. Pugh, "Finding Bugs is Easy," Proceedings of the 19th Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA '04), Vancouver, British Columbia, Canada, 2004.

[4] T. Kremenek and D. R. Engler, "Z-Ranking: Using Statistical Analysis to Counter the Impact of Static Analysis Approximations," Proceedings of the 10th International Symposium on Static Analysis (SAS 2003), San Diego, CA, USA, 2003.


Computer Science and Artificial Intelligence Laboratory (CSAIL)
The Stata Center, Building 32 - 32 Vassar Street - Cambridge, MA 02139 - USA
tel:+1-617-253-0073 - publications@csail.mit.edu