The value of using static code attributes to learn defect predictors has been widely debated. Prior work has explored issues like the merits of “McCabes versus Halstead versus lines of code counts” for generating defect predictors. We show here that such debates are irrelevant since how the attributes are used to build predictors is much more important than which particular attributes are used. Also, contrary to prior pessimism, we show that such defect predictors are demonstrably useful and, on the data studied here, yield predictors with a mean probability of detection of 71 percent and mean false alarm rates of 25 percent. These predictors would be useful for prioritizing a resource-bound exploration of code that has yet to be inspected.
Yann-Gaël Guéhéneuc, 2014/02/12
The authors make the case that, when it comes to defect prediction, “how the attributes are used to build predictors is much more important than which particular attributes are used”. To come to this conclusion, they use 38 attributes and three different learners: OneR, J48, and naïve Bayes. They use the NASA MDP (Metrics Data Program) datasets. They show that they can build a defect predictor that has “a mean probability of detection of 71 percent and mean false alarm rates of 25 percent”.
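For concreteness, here is a minimal sketch of such a predictor, not the authors' exact pipeline: it assumes scikit-learn, a hypothetical ''kc1.csv'' export of one MDP dataset with a binary ''defects'' column, and uses a log transform of the numeric attributes as a stand-in for the paper's log-filtering preprocessor (the “how the attributes are used” step), then reports the paper's two metrics, probability of detection (pd) and probability of false alarm (pf).

<code python>
# Minimal sketch, assuming a CSV export of a NASA MDP dataset where each
# row holds the static code attributes of one module and a binary
# "defects" column marks defective modules. File and column names are
# hypothetical.
import numpy as np
import pandas
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix

data = pandas.read_csv("kc1.csv")
X = data.drop(columns=["defects"]).to_numpy(dtype=float)
y = data["defects"].to_numpy()

# "How the attributes are used": log-transform the numeric attributes
# before naive Bayes (log1p is used here to sidestep log(0); the paper's
# log-filtering handles zeros slightly differently).
X = np.log1p(X)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33,
                                          random_state=0)
model = GaussianNB().fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
pd_rate = tp / (tp + fn)   # probability of detection (recall)
pf_rate = fp / (fp + tn)   # probability of false alarm
print(f"pd = {pd_rate:.2f}, pf = {pf_rate:.2f}")
</code>

The pd/pf figures computed this way are directly comparable to the paper's reported 71 percent detection and 25 percent false alarm rates, though actual numbers will depend on the dataset and split.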
They start by justifying the need for such predictors: “These potential defect-prone trouble spots can then be examined in more detail by, say, model checking, intensive testing, etc.”