Menzies, T.; Greenwald, J. & Frank, A. Data Mining Static Code Attributes to Learn Defect Predictors. IEEE Transactions on Software Engineering, IEEE CS Press, 2007, 33, 2–13

Abstract

The value of using static code attributes to learn defect predictors has been widely debated. Prior work has explored issues like the merits of “McCabes versus Halstead versus lines of code counts” for generating defect predictors. We show here that such debates are irrelevant since how the attributes are used to build predictors is much more important than which particular attributes are used. Also, contrary to prior pessimism, we show that such defect predictors are demonstrably useful and, on the data studied here, yield predictors with a mean probability of detection of 71 percent and mean false alarm rates of 25 percent. These predictors would be useful for prioritizing a resource-bound exploration of code that has yet to be inspected.

Comments

Yann-Gaël Guéhéneuc, 2014/02/12

The authors make the case that, when it comes to defect prediction, “how the attributes are used to build predictors is much more important than which particular attributes are used”. To reach this conclusion, they use 38 attributes and three different learners: OneR, J48, and naïve Bayes. As datasets, they use tables from the NASA MDP (Metrics Data Program). They show that they can build defect predictors with “a mean probability of detection of 71 percent and mean false alarm rates of 25 percent”.
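
As a concrete illustration of the kind of experiment the paper runs, the sketch below trains a naïve Bayes predictor on static code attributes and reports the two measures quoted above, pd and pf. This is not the authors' exact pipeline (their evaluation protocol is more elaborate); the file name kc1.csv and the defects column are assumptions about how one MDP module table might be exported.

  # Minimal sketch (not the authors' exact pipeline) of learning a defect
  # predictor with naive Bayes, one of the three learners used in the paper.
  # "kc1.csv" and the "defects" column are illustrative assumptions.
  import pandas as pd
  from sklearn.naive_bayes import GaussianNB
  from sklearn.model_selection import train_test_split
  from sklearn.metrics import confusion_matrix

  data = pd.read_csv("kc1.csv")                 # one module-level MDP table (hypothetical export)
  X = data.drop(columns=["defects"]).values     # static code attributes (LOC, Halstead, McCabe, ...)
  y = data["defects"].astype(int).values        # 1 = defective module, 0 = defect-free

  X_train, X_test, y_train, y_test = train_test_split(
      X, y, test_size=0.33, stratify=y, random_state=0)

  model = GaussianNB().fit(X_train, y_train)
  tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()

  pd_rate = tp / (tp + fn)    # probability of detection: defective modules correctly flagged
  pf_rate = fp / (fp + tn)    # probability of false alarm: defect-free modules wrongly flagged
  print(f"pd = {pd_rate:.2f}, pf = {pf_rate:.2f}")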

The authors start by justifying the need for such predictors: “These potential defect-prone trouble spots can then be examined in more detail by, say, model checking, intensive testing, etc.” Then, they defend the use of static code metrics against Fenton and Pfleeger's “insightful example where the same functionality is achieved using different programming language constructs resulting in different static measurements for that module”, arguing that static code metrics “are useful as probabilistic statements that the frequency of faults tends to increase in code modules that trigger the predictor” and that “[w]e [should] actively [research] better code metrics which, potentially, [would] yield “better” predictors.” But, before searching for “better” code metrics, the authors argue that we need a baseline for prediction models, and they propose such a baseline in their paper.
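
To make Fenton and Pfleeger's objection concrete, here is a small example of my own (not from the paper): two functionally identical routines that a static analyzer would nevertheless measure quite differently.

  # Two functionally equivalent routines: both return the sum of the squares of
  # the even numbers in xs, yet static measurements differ (the first has more
  # lines of code and a higher McCabe cyclomatic complexity than the second).
  def sum_even_squares_loop(xs):
      total = 0
      for x in xs:              # decision point
          if x % 2 == 0:        # decision point
              total += x * x
      return total

  def sum_even_squares_expr(xs):
      return sum(x * x for x in xs if x % 2 == 0)

  assert sum_even_squares_loop(range(10)) == sum_even_squares_expr(range(10))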
