Discourse analysis procedures: Reliability issues

Karen Hux, Dixie Sanger, Robert Reid, Amy Maschka

Research output: Contribution to journal › Article

13 Citations (Scopus)

Abstract

Performing discourse analyses to supplement assessment procedures and facilitate intervention planning is only valuable if the observations are reliable. The purpose of the present study was to evaluate and compare four methods of assessing reliability on one discourse analysis procedure: a modified version of Damico's Clinical Discourse Analysis (1985a, 1985b, 1992). The selected methods were: (a) Pearson product-moment correlations, (b) interobserver agreement, (c) Cohen's kappa, and (d) generalizability coefficients. Results showed high correlation coefficients and high percentages of interobserver agreement when error type was not taken into account. However, interobserver agreement percentages obtained solely for target behavior occurrences and Cohen's kappa revealed that much of the agreement between raters was due to chance and the high frequency of target behavior non-occurrence. Generalizability coefficients revealed that the procedure was fair to good for discriminating among persons with differing levels of language competency for some aspects of communication performance but was less than desirable for others; the aggregate score was below recommended standards for differentiating among people for diagnostic purposes.
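The abstract's central point — that raw interobserver agreement can look strong while Cohen's kappa shows much of it is chance agreement driven by frequent non-occurrence — can be illustrated numerically. The sketch below uses made-up ratings (not data from the study): two hypothetical raters code 20 utterances for a rarely occurring target behavior, and kappa is computed from the standard formula, (p_o − p_e) / (1 − p_e), where p_e is the agreement expected by chance from each rater's marginal rates.

```python
def percent_agreement(a, b):
    """Point-by-point agreement across all coded utterances."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohen_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    p_o = percent_agreement(a, b)
    # Chance agreement from each rater's marginal proportions per category.
    p_e = sum((a.count(k) / n) * (b.count(k) / n) for k in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)

def occurrence_agreement(a, b):
    """Agreement restricted to points where either rater marked an occurrence."""
    marked = [(x, y) for x, y in zip(a, b) if x == 1 or y == 1]
    return sum(x == y for x, y in marked) / len(marked)

# Hypothetical codings: 1 = target behavior occurred, 0 = did not occur.
# Non-occurrence dominates, as in the skewed distributions the study describes.
rater_a = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
rater_b = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0]

print(f"overall agreement:        {percent_agreement(rater_a, rater_b):.2f}")  # 0.90
print(f"Cohen's kappa:            {cohen_kappa(rater_a, rater_b):.2f}")        # 0.44
print(f"occurrence-only agreement: {occurrence_agreement(rater_a, rater_b):.2f}")  # 0.33
```

Overall agreement is 90%, yet the raters agree on only one of the three utterances either of them flagged, and kappa drops to 0.44 — the same dissociation the study reports between overall agreement percentages and chance-corrected or occurrence-only indices.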

Original language: English (US)
Pages (from-to): 133-150
Number of pages: 18
Journal: Journal of Communication Disorders
Volume: 30
Issue number: 2
DOI: 10.1016/S0021-9924(96)00060-3
State: Published - Jan 1 1997


ASJC Scopus subject areas

  • Experimental and Cognitive Psychology
  • Linguistics and Language
  • Cognitive Neuroscience
  • LPN and LVN
  • Speech and Hearing

Cite this

Discourse analysis procedures: Reliability issues. / Hux, Karen; Sanger, Dixie; Reid, Robert; Maschka, Amy.

In: Journal of Communication Disorders, Vol. 30, No. 2, 01.01.1997, p. 133-150.

Research output: Contribution to journal › Article

Hux, Karen; Sanger, Dixie; Reid, Robert; Maschka, Amy. / Discourse analysis procedures: Reliability issues. In: Journal of Communication Disorders. 1997; Vol. 30, No. 2. pp. 133-150.
@article{0db7fd1c909f440e8a384022ccd4978a,
title = "Discourse analysis procedures: Reliability issues",
abstract = "Performing discourse analyses to supplement assessment procedures and facilitate intervention planning is only valuable if the observations are reliable. The purpose of the present study was to evaluate and compare four methods of assessing reliability on one discourse analysis procedure: a modified version of Damico's Clinical Discourse Analysis (1985a, 1985b, 1992). The selected methods were: (a) Pearson product-moment correlations, (b) interobserver agreement, (c) Cohen's kappa, and (d) generalizability coefficients. Results showed high correlation coefficients and high percentages of interobserver agreement when error type was not taken into account. However, interobserver agreement percentages obtained solely for target behavior occurrences and Cohen's kappa revealed that much of the agreement between raters was due to chance and the high frequency of target behavior non-occurrence. Generalizability coefficients revealed that the procedure was fair to good for discriminating among persons with differing levels of language competency for some aspects of communication performance but was less than desirable for others; the aggregate score was below recommended standards for differentiating among people for diagnostic purposes.",
author = "Karen Hux and Dixie Sanger and Robert Reid and Amy Maschka",
year = "1997",
month = "1",
day = "1",
doi = "10.1016/S0021-9924(96)00060-3",
language = "English (US)",
volume = "30",
pages = "133--150",
journal = "Journal of Communication Disorders",
issn = "0021-9924",
publisher = "Elsevier Inc.",
number = "2",

}

TY - JOUR

T1 - Discourse analysis procedures

T2 - Reliability issues

AU - Hux, Karen

AU - Sanger, Dixie

AU - Reid, Robert

AU - Maschka, Amy

PY - 1997/1/1

Y1 - 1997/1/1

N2 - Performing discourse analyses to supplement assessment procedures and facilitate intervention planning is only valuable if the observations are reliable. The purpose of the present study was to evaluate and compare four methods of assessing reliability on one discourse analysis procedure: a modified version of Damico's Clinical Discourse Analysis (1985a, 1985b, 1992). The selected methods were: (a) Pearson product-moment correlations, (b) interobserver agreement, (c) Cohen's kappa, and (d) generalizability coefficients. Results showed high correlation coefficients and high percentages of interobserver agreement when error type was not taken into account. However, interobserver agreement percentages obtained solely for target behavior occurrences and Cohen's kappa revealed that much of the agreement between raters was due to chance and the high frequency of target behavior non-occurrence. Generalizability coefficients revealed that the procedure was fair to good for discriminating among persons with differing levels of language competency for some aspects of communication performance but was less than desirable for others; the aggregate score was below recommended standards for differentiating among people for diagnostic purposes.

AB - Performing discourse analyses to supplement assessment procedures and facilitate intervention planning is only valuable if the observations are reliable. The purpose of the present study was to evaluate and compare four methods of assessing reliability on one discourse analysis procedure: a modified version of Damico's Clinical Discourse Analysis (1985a, 1985b, 1992). The selected methods were: (a) Pearson product-moment correlations, (b) interobserver agreement, (c) Cohen's kappa, and (d) generalizability coefficients. Results showed high correlation coefficients and high percentages of interobserver agreement when error type was not taken into account. However, interobserver agreement percentages obtained solely for target behavior occurrences and Cohen's kappa revealed that much of the agreement between raters was due to chance and the high frequency of target behavior non-occurrence. Generalizability coefficients revealed that the procedure was fair to good for discriminating among persons with differing levels of language competency for some aspects of communication performance but was less than desirable for others; the aggregate score was below recommended standards for differentiating among people for diagnostic purposes.

UR - http://www.scopus.com/inward/record.url?scp=0031094021&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0031094021&partnerID=8YFLogxK

U2 - 10.1016/S0021-9924(96)00060-3

DO - 10.1016/S0021-9924(96)00060-3

M3 - Article

C2 - 9100128

AN - SCOPUS:0031094021

VL - 30

SP - 133

EP - 150

JO - Journal of Communication Disorders

JF - Journal of Communication Disorders

SN - 0021-9924

IS - 2

ER -