Visual aids and structured criteria for improving visual inspection and interpretation of single-case designs

Wayne W. Fisher, Michael E. Kelley, Joanna E. Lomas

Research output: Contribution to journal › Article

132 Citations (Scopus)

Abstract

Because behavior analysis is a data-driven process, a critical skill for behavior analysts is accurate visual inspection and interpretation of single-case data. Study 1 was a basic study in which we increased the accuracy of visual inspection methods for A-B designs through two refinements of the split-middle (SM) method called the dual-criteria (DC) and conservative dual-criteria (CDC) methods. The accuracy of these visual inspection methods was compared with one another and with two statistical methods (Allison & Gorman, 1993; Gottman, 1981) using a computer-simulated Monte Carlo study. Results indicated that the DC and CDC methods controlled Type I error rates much better than the SM method and had considerably higher power (to detect real treatment effects) than the two statistical methods. In Study 2, brief verbal and written instructions with modeling were used to train 5 staff members to use the DC method, and in Study 3, these training methods were incorporated into a slide presentation and were used to rapidly (i.e., 15 min) train a large group of individuals (N = 87). Interpretation accuracy increased from a baseline mean of 55% to a treatment mean of 94% in Study 2 and from a baseline mean of 71% to a treatment mean of 95% in Study 3. Thus, Study 1 answered basic questions about the accuracy of several methods of interpreting A-B designs; Study 2 showed how that information could be used to increase the accuracy of human visual inspectors; and Study 3 showed how the training procedures from Study 2 could be modified into a format that would facilitate rapid training of large groups of individuals to interpret single-case designs.
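The abstract describes the DC and CDC decision rules only at a high level. As a rough illustration of how such structured criteria are typically applied to A-B data, the sketch below extrapolates a baseline mean line and a split-middle trend line into the treatment phase, counts the treatment-phase points that fall beyond both lines, and compares that count against a binomial criterion. The 0.25-standard-deviation shift used for the CDC variant, the handling of the expected direction of change, and the exact significance criterion are simplifying assumptions and should not be read as the article's exact procedure.

```python
# Hedged sketch of the dual-criteria (DC) and conservative dual-criteria (CDC)
# decision rules for a single A-B comparison; criterion details are
# simplifications of the procedure described in the article.
from math import comb
from statistics import mean, median, pstdev


def split_middle_trend(baseline):
    """Split-middle trend line: connect the (median session, median value)
    points of the first and second halves of the baseline phase."""
    n = len(baseline)
    first, second = baseline[: n // 2], baseline[(n + 1) // 2:]
    x1, y1 = median(range(len(first))), median(first)
    x2, y2 = median(range((n + 1) // 2, n)), median(second)
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1  # slope and intercept on the session index


def dc_test(baseline, treatment, expected="increase", conservative=False, alpha=0.05):
    """Return True if enough treatment points fall beyond BOTH the extrapolated
    baseline mean line and trend line to meet a binomial criterion."""
    slope, intercept = split_middle_trend(baseline)
    # CDC variant: shift both criterion lines by 0.25 baseline SDs (assumed value).
    shift = 0.25 * pstdev(baseline) if conservative else 0.0
    sign = 1 if expected == "increase" else -1
    level = mean(baseline) + sign * shift
    hits = 0
    for i, y in enumerate(treatment):
        x = len(baseline) + i  # continue the session index into the B phase
        trend = slope * x + intercept + sign * shift
        hits += (sign * y > sign * level) and (sign * y > sign * trend)
    # Smallest count whose chance probability under a fair-coin binomial is < alpha.
    n_t = len(treatment)
    criterion = next(k for k in range(n_t + 2)
                     if sum(comb(n_t, j) for j in range(k, n_t + 1)) / 2 ** n_t < alpha)
    return hits >= criterion
```

Under these assumptions, a call such as dc_test([5, 6, 5, 7, 6], [9, 10, 9, 11, 10, 12]) with hypothetical data flags a treatment effect, because all six treatment points exceed both extrapolated baseline lines.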

Original language: English (US)
Pages (from-to): 387-406
Number of pages: 20
Journal: Journal of Applied Behavior Analysis
Volume: 36
Issue number: 3
DOI: 10.1901/jaba.2003.36-387
State: Published - Jan 1 2003

Keywords

  • Assessment
  • Behavior analysis
  • Data analysis
  • Interrater agreement
  • Visual inspection

ASJC Scopus subject areas

  • Applied Psychology
  • Sociology and Political Science
  • Philosophy

Cite this

Visual aids and structured criteria for improving visual inspection and interpretation of single-case designs. / Fisher, Wayne W.; Kelley, Michael E.; Lomas, Joanna E.

In: Journal of Applied Behavior Analysis, Vol. 36, No. 3, 01.01.2003, p. 387-406.

Research output: Contribution to journal › Article

@article{e2d1c0eb924c439aa449ba02569f90fe,
title = "Visual aids and structured criteria for improving visual inspection and interpretation of single-case designs",
abstract = "Because behavior analysis is a data-driven process, a critical skill for behavior analysts is accurate visual inspection and interpretation of single-case data. Study 1 was a basic study in which we increased the accuracy of visual inspection methods for A-B designs through two refinements of the split-middle (SM) method called the dual-criteria (DC) and conservative dual-criteria (CDC) methods. The accuracy of these visual inspection methods was compared with one another and with two statistical methods (Allison & Gorman, 1993; Gottman, 1981) using a computer-simulated Monte Carlo study. Results indicated that the DC and CDC methods controlled Type I error rates much better than the SM method and had considerably higher power (to detect real treatment effects) than the two statistical methods. In Study 2, brief verbal and written instructions with modeling were used to train 5 staff members to use the DC method, and in Study 3, these training methods were incorporated into a slide presentation and were used to rapidly (i.e., 15 min) train a large group of individuals (N = 87). Interpretation accuracy increased from a baseline mean of 55{\%} to a treatment mean of 94{\%} in Study 2 and from a baseline mean of 71{\%} to a treatment mean of 95{\%} in Study 3. Thus, Study 1 answered basic questions about the accuracy of several methods of interpreting A-B designs; Study 2 showed how that information could be used to increase the accuracy of human visual inspectors; and Study 3 showed how the training procedures from Study 2 could be modified into a format that would facilitate rapid training of large groups of individuals to interpret single-case designs.",
keywords = "Assessment, Behavior analysis, Data analysis, Interrater agreement, Visual inspection",
author = "Fisher, {Wayne W.} and Kelley, {Michael E.} and Lomas, {Joanna E.}",
year = "2003",
month = "1",
day = "1",
doi = "10.1901/jaba.2003.36-387",
language = "English (US)",
volume = "36",
pages = "387--406",
journal = "Journal of Applied Behavior Analysis",
issn = "0021-8855",
publisher = "Society for the Experimental Analysis of Behavior Inc.",
number = "3",

}
