An experiment to assess the cost-benefits of code inspections in large scale software development

Adam A. Porter, Harvey P. Siy, Carol A. Toman, Lawrence G. Votta

Research output: Contribution to journal › Article

79 Citations (Scopus)

Abstract

We conducted a long-term experiment to compare the costs and benefits of several different software inspection methods. These methods were applied by professional developers to a commercial software product they were creating. Because the laboratory for this experiment was a live development effort, we took special care to minimize cost and risk to the project, while maximizing our ability to gather useful data. This article has several goals: 1) to describe the experiment's design and show how we used simulation techniques to optimize it, 2) to present our results and discuss their implications for both software practitioners and researchers, and 3) to discuss several new questions raised by our findings. For each inspection, we randomly assigned three independent variables: 1) the number of reviewers on each inspection team (1, 2, or 4), 2) the number of teams inspecting the code unit (1 or 2), and 3) the requirement that defects be repaired between the first and second team's inspections. The reviewers for each inspection were randomly selected without replacement from a pool of 11 experienced software developers. The dependent variables for each inspection included inspection interval (elapsed time), total effort, and the defect detection rate. Our results showed that these treatments did not significantly influence the defect detection effectiveness, but that certain combinations of changes dramatically increased the inspection interval.
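A minimal sketch of the randomized assignment described above, assuming Python and hypothetical names (the paper does not publish its assignment code): each inspection draws a team size of 1, 2, or 4; one or two teams; and, when two teams are used, whether defects must be repaired between the two sessions. Reviewers are sampled without replacement from the pool of 11 developers.

import random

# Illustrative sketch only; the authors' actual procedure used
# simulation techniques to optimize the design and is not shown here.
# All names (REVIEWER_POOL, assign_treatment) are hypothetical.
REVIEWER_POOL = [f"dev{i:02d}" for i in range(1, 12)]  # pool of 11 reviewers

def assign_treatment(rng):
    """Randomly assign the three independent variables for one inspection."""
    team_size = rng.choice([1, 2, 4])   # reviewers per inspection team
    num_teams = rng.choice([1, 2])      # teams inspecting the code unit
    # Repairing defects between sessions only applies when two teams inspect.
    repair_between = rng.choice([True, False]) if num_teams == 2 else None
    # Select all reviewers for this inspection without replacement.
    reviewers = rng.sample(REVIEWER_POOL, team_size * num_teams)
    teams = [reviewers[i * team_size:(i + 1) * team_size]
             for i in range(num_teams)]
    return {"team_size": team_size, "num_teams": num_teams,
            "repair_between": repair_between, "teams": teams}

if __name__ == "__main__":
    rng = random.Random(1997)           # fixed seed for reproducibility
    for inspection in range(3):
        print(inspection, assign_treatment(rng))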

Original language: English (US)
Pages (from-to): 329-346
Number of pages: 18
Journal: IEEE Transactions on Software Engineering
Volume: 23
Issue number: 6
DOIs: 10.1109/32.601071
State: Published - Dec 1 1997

Keywords

  • ANOVA
  • Controlled experiments
  • Industrial experimentation
  • Power analysis
  • Software inspection

ASJC Scopus subject areas

  • Software

Cite this

An experiment to assess the cost-benefits of code inspections in large scale software development. / Porter, Adam A.; Siy, Harvey P.; Toman, Carol A.; Votta, Lawrence G.

In: IEEE Transactions on Software Engineering, Vol. 23, No. 6, 01.12.1997, p. 329-346.

Research output: Contribution to journal › Article

@article{59d3d77ed70b4374be3d8df41eb7ccf5,
title = "An experiment to assess the cost-benefits of code inspections in large scale software development",
abstract = "We conducted a long-term experiment to compare the costs and benefits of several different software inspection methods. These methods were applied by professional developers to a commercial software product they were creating. Because the laboratory for this experiment was a live development effort, we took special care to minimize cost and risk to the project, while maximizing our ability to gather useful data. This article has several goals: 1) to describe the experiment's design and show how we used simulation techniques to optimize it, 2) to present our results and discuss their implications for both software practitioners and researchers, and 3) to discuss several new questions raised by our findings. For each inspection, we randomly assigned three independent variables: 1) the number of reviewers on each inspection team (1, 2, or 4), 2) the number of teams inspecting the code unit (1 or 2), and 3) the requirement that defects be repaired between the first and second team's inspections. The reviewers for each inspection were randomly selected without replacement from a pool of 11 experienced software developers. The dependent variables for each inspection included inspection interval (elapsed time), total effort, and the defect detection rate. Our results showed that these treatments did not significantly influence the defect detection effectiveness, but that certain combinations of changes dramatically increased the inspection interval.",
keywords = "ANOVA, Controlled experiments, Industrial experimentation, Power analysis, Software inspection",
author = "Porter, {Adam A.} and Siy, {Harvey P.} and Toman, {Carol A.} and Votta, {Lawrence G.}",
year = "1997",
month = "12",
day = "1",
doi = "10.1109/32.601071",
language = "English (US)",
volume = "23",
pages = "329--346",
journal = "IEEE Transactions on Software Engineering",
issn = "0098-5589",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "6",

}

TY - JOUR
T1 - An experiment to assess the cost-benefits of code inspections in large scale software development
AU - Porter, Adam A.
AU - Siy, Harvey P.
AU - Toman, Carol A.
AU - Votta, Lawrence G.
PY - 1997/12/1
Y1 - 1997/12/1
N2 - We conducted a long-term experiment to compare the costs and benefits of several different software inspection methods. These methods were applied by professional developers to a commercial software product they were creating. Because the laboratory for this experiment was a live development effort, we took special care to minimize cost and risk to the project, while maximizing our ability to gather useful data. This article has several goals: 1) to describe the experiment's design and show how we used simulation techniques to optimize it, 2) to present our results and discuss their implications for both software practitioners and researchers, and 3) to discuss several new questions raised by our findings. For each inspection, we randomly assigned three independent variables: 1) the number of reviewers on each inspection team (1, 2, or 4), 2) the number of teams inspecting the code unit (1 or 2), and 3) the requirement that defects be repaired between the first and second team's inspections. The reviewers for each inspection were randomly selected without replacement from a pool of 11 experienced software developers. The dependent variables for each inspection included inspection interval (elapsed time), total effort, and the defect detection rate. Our results showed that these treatments did not significantly influence the defect detection effectiveness, but that certain combinations of changes dramatically increased the inspection interval.
AB - We conducted a long-term experiment to compare the costs and benefits of several different software inspection methods. These methods were applied by professional developers to a commercial software product they were creating. Because the laboratory for this experiment was a live development effort, we took special care to minimize cost and risk to the project, while maximizing our ability to gather useful data. This article has several goals: 1) to describe the experiment's design and show how we used simulation techniques to optimize it, 2) to present our results and discuss their implications for both software practitioners and researchers, and 3) to discuss several new questions raised by our findings. For each inspection, we randomly assigned three independent variables: 1) the number of reviewers on each inspection team (1, 2, or 4), 2) the number of teams inspecting the code unit (1 or 2), and 3) the requirement that defects be repaired between the first and second team's inspections. The reviewers for each inspection were randomly selected without replacement from a pool of 11 experienced software developers. The dependent variables for each inspection included inspection interval (elapsed time), total effort, and the defect detection rate. Our results showed that these treatments did not significantly influence the defect detection effectiveness, but that certain combinations of changes dramatically increased the inspection interval.
KW - ANOVA
KW - Controlled experiments
KW - Industrial experimentation
KW - Power analysis
KW - Software inspection
UR - http://www.scopus.com/inward/record.url?scp=0031167442&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0031167442&partnerID=8YFLogxK
U2 - 10.1109/32.601071
DO - 10.1109/32.601071
M3 - Article
AN - SCOPUS:0031167442
VL - 23
SP - 329
EP - 346
JO - IEEE Transactions on Software Engineering
JF - IEEE Transactions on Software Engineering
SN - 0098-5589
IS - 6
ER -