Experiment to assess cost-benefits of inspection meetings and their alternatives: a pilot study

Patricia McCarthy, Adam Porter, Harvey Siy, Lawrence G. Votta

Research output: Contribution to conference › Paper

11 Citations (Scopus)

Abstract

We hypothesize that inspection meetings are far less effective than many people believe and that meetingless inspections are equally effective. However, two of our previous industrial case studies contradict each other on this issue. Therefore, we are conducting a multi-trial, controlled experiment to assess the benefits of inspection meetings and to evaluate alternative procedures. The experiment manipulates four independent variables: (1) the inspection method used (two methods involve meetings, one method does not), (2) the requirements specification to be inspected (there are two), (3) the inspection round (each team participates in two inspections), and (4) the presentation order (either specification can be inspected first). For each experiment we measure three dependent variables: (1) the individual fault detection rate, (2) the team fault detection rate, and (3) the percentage of faults originally discovered after the initial inspection phase (during which reviewers individually analyze the document). So far we have completed one run of the experiment with 21 graduate students in computer science at the University of Maryland as subjects, but we do not yet have enough data points to draw definite conclusions. Rather than presenting preliminary conclusions, this article (1) describes the experiment's design and the provocative hypotheses we are evaluating, (2) summarizes our observations from the experiment's initial run, and (3) discusses how we are using these observations to verify our data collection instruments and to refine future experimental runs.
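To illustrate the three dependent variables listed in the abstract, here is a minimal Python sketch with made-up inspection data. All reviewer names, fault IDs, and counts are hypothetical, not taken from the study; the only assumption carried over is that the total number of seeded faults in each specification is known to the experimenters.

```python
# Faults seeded in the specification (assumed known for the experiment).
TOTAL_FAULTS = 30

# Faults each reviewer found individually during the preparation phase
# (hypothetical data).
individual_faults = {
    "reviewer_a": {1, 4, 7, 9},
    "reviewer_b": {2, 4, 11},
    "reviewer_c": {4, 7, 15, 20},
}

# Faults recorded by the team after the collection (meeting) phase.
team_faults = {1, 2, 4, 7, 9, 11, 15, 20, 22}

# (1) Individual fault detection rate, per reviewer.
individual_rates = {
    r: len(found) / TOTAL_FAULTS for r, found in individual_faults.items()
}

# (2) Team fault detection rate.
team_rate = len(team_faults) / TOTAL_FAULTS  # 9/30 = 0.3 here

# (3) Percentage of the team's faults first discovered *after* the
# individual preparation phase -- the "meeting gain" the experiment
# is designed to measure.
union_individual = set().union(*individual_faults.values())
meeting_gain = len(team_faults - union_individual) / len(team_faults)

print(individual_rates, team_rate, meeting_gain)
```

With these toy numbers only fault 22 surfaces at the meeting, so the meeting gain is 1/9; a meetingless inspection method would, by the paper's hypothesis, show a gain near zero at much lower cost.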

Original language: English (US)
Pages: 100-111
Number of pages: 12
State: Published - Jan 1 1996
Externally published: Yes
Event: Proceedings of the 1996 3rd International Software Metrics Symposium - Berlin, Germany
Duration: Mar 25 1996 - Mar 26 1996



ASJC Scopus subject areas

  • Computer Science(all)

Cite this

McCarthy, P., Porter, A., Siy, H., & Votta, L. G. (1996). Experiment to assess cost-benefits of inspection meetings and their alternatives: a pilot study. 100-111. Paper presented at Proceedings of the 1996 3rd International Software Metrics Symposium, Berlin, Germany.

@conference{f879297a6efd4f3db5cabeddfaa72dfc,
title = "Experiment to assess cost-benefits of inspection meetings and their alternatives: a pilot study",
abstract = "We hypothesize that inspection meetings are far less effective than many people believe and that meetingless inspections are equally effective. However, two of our previous industrial case studies contradict each other on this issue. Therefore, we are conducting a multi-trial, controlled experiment to assess the benefits of inspection meetings and to evaluate alternative procedures. The experiment manipulates four independent variables: (1) the inspection method used (two methods involve meetings, one method does not), (2) the requirements specification to be inspected (there are two), (3) the inspection round (each team participates in two inspections), and (4) the presentation order (either specification can be inspected first). For each experiment we measure three dependent variables: (1) the individual fault detection rate, (2) the team fault detection rate, and (3) the percentage of faults originally discovered after the initial inspection phase (during which reviewers individually analyze the document). So far we have completed one run of the experiment with 21 graduate students in computer science at the University of Maryland as subjects, but we do not yet have enough data points to draw definite conclusions. Rather than presenting preliminary conclusions, this article (1) describes the experiment's design and the provocative hypotheses we are evaluating, (2) summarizes our observations from the experiment's initial run, and (3) discusses how we are using these observations to verify our data collection instruments and to refine future experimental runs.",
author = "McCarthy, Patricia and Porter, Adam and Siy, Harvey and Votta, {Lawrence G.}",
year = "1996",
month = "1",
day = "1",
language = "English (US)",
pages = "100--111",
note = "Proceedings of the 1996 3rd International Software Metrics Symposium ; Conference date: 25-03-1996 Through 26-03-1996",

}

TY - CONF

T1 - Experiment to assess cost-benefits of inspection meetings and their alternatives

T2 - a pilot study

AU - McCarthy, Patricia

AU - Porter, Adam

AU - Siy, Harvey

AU - Votta, Lawrence G.

PY - 1996/1/1

Y1 - 1996/1/1

N2 - We hypothesize that inspection meetings are far less effective than many people believe and that meetingless inspections are equally effective. However, two of our previous industrial case studies contradict each other on this issue. Therefore, we are conducting a multi-trial, controlled experiment to assess the benefits of inspection meetings and to evaluate alternative procedures. The experiment manipulates four independent variables: (1) the inspection method used (two methods involve meetings, one method does not), (2) the requirements specification to be inspected (there are two), (3) the inspection round (each team participates in two inspections), and (4) the presentation order (either specification can be inspected first). For each experiment we measure three dependent variables: (1) the individual fault detection rate, (2) the team fault detection rate, and (3) the percentage of faults originally discovered after the initial inspection phase (during which reviewers individually analyze the document). So far we have completed one run of the experiment with 21 graduate students in computer science at the University of Maryland as subjects, but we do not yet have enough data points to draw definite conclusions. Rather than presenting preliminary conclusions, this article (1) describes the experiment's design and the provocative hypotheses we are evaluating, (2) summarizes our observations from the experiment's initial run, and (3) discusses how we are using these observations to verify our data collection instruments and to refine future experimental runs.

AB - We hypothesize that inspection meetings are far less effective than many people believe and that meetingless inspections are equally effective. However, two of our previous industrial case studies contradict each other on this issue. Therefore, we are conducting a multi-trial, controlled experiment to assess the benefits of inspection meetings and to evaluate alternative procedures. The experiment manipulates four independent variables: (1) the inspection method used (two methods involve meetings, one method does not), (2) the requirements specification to be inspected (there are two), (3) the inspection round (each team participates in two inspections), and (4) the presentation order (either specification can be inspected first). For each experiment we measure three dependent variables: (1) the individual fault detection rate, (2) the team fault detection rate, and (3) the percentage of faults originally discovered after the initial inspection phase (during which reviewers individually analyze the document). So far we have completed one run of the experiment with 21 graduate students in computer science at the University of Maryland as subjects, but we do not yet have enough data points to draw definite conclusions. Rather than presenting preliminary conclusions, this article (1) describes the experiment's design and the provocative hypotheses we are evaluating, (2) summarizes our observations from the experiment's initial run, and (3) discusses how we are using these observations to verify our data collection instruments and to refine future experimental runs.

UR - http://www.scopus.com/inward/record.url?scp=0029709620&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0029709620&partnerID=8YFLogxK

M3 - Paper

AN - SCOPUS:0029709620

SP - 100

EP - 111

ER -