Participants shift response deadlines based on list difficulty during reading-aloud megastudies

Michael J. Cortese, Maya M. Khanna, Robert Kopp, Jonathan B. Santo, Kailey S. Preston, Tyler Van Zuiden

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

We tested the list homogeneity effect in reading aloud (e.g., Lupker, Brown, & Colombo, 1997) using a megastudy paradigm. In each of two conditions, we used 25 blocks of 100 trials. In the random condition, words were selected randomly for each block, whereas in the experimental condition, words were blocked by difficulty (e.g., easy words together, etc.), but the order of the blocks was randomized. We predicted that standard factors (e.g., frequency) would be more predictive of reaction times (RTs) in the blocked than in the random condition, because the range of RTs across the experiment would increase in the blocked condition. Indeed, we found that the standard deviations and ranges of RTs were larger in the blocked than in the random condition. In addition, an examination of items at the difficulty extremes (i.e., very easy vs. very difficult) demonstrated a response bias. In regression analyses, a predictor set of seven sublexical, lexical, and semantic variables accounted for 2.8% more RT variance (and 2.6% more zRT variance) in the blocked than in the random condition. These results indicate that response deadlines apply to megastudies of reading aloud, and that the influence of predictors may be underestimated in megastudies when item presentation is randomized. In addition, the CDP++ model accounted for 0.8% more variance in RTs (1.2% in zRTs) in the blocked than in the random condition. Thus, computational models may have more predictive power on item sets blocked by difficulty than on those presented in random order. The results also indicate that models of word processing need to accommodate response criterion shifts.

Original language: English (US)
Pages (from-to): 589-599
Number of pages: 11
Journal: Memory and Cognition
Volume: 45
Issue number: 4
DOI: 10.3758/s13421-016-0678-8
State: Published - Feb 16 2017

Keywords

  • Megastudy
  • Reading aloud
  • Response deadline

ASJC Scopus subject areas

  • Neuropsychology and Physiological Psychology
  • Experimental and Cognitive Psychology
  • Arts and Humanities (miscellaneous)

Cite this

Participants shift response deadlines based on list difficulty during reading-aloud megastudies. / Cortese, Michael J.; Khanna, Maya M.; Kopp, Robert; Santo, Jonathan B.; Preston, Kailey S.; Van Zuiden, Tyler.

In: Memory and Cognition, Vol. 45, No. 4, 16.02.2017, p. 589-599.

@article{afd5e85a9d184a148186881f21326b65,
title = "Participants shift response deadlines based on list difficulty during reading-aloud megastudies",
abstract = "We tested the list homogeneity effect in reading aloud (e.g., Lupker, Brown, & Colombo, 1997) using a megastudy paradigm. In each of two conditions, we used 25 blocks of 100 trials. In the random condition, words were selected randomly for each block, whereas in the experimental condition, words were blocked by difficulty (e.g., easy words together, etc.), but the order of the blocks was randomized. We predicted that standard factors (e.g., frequency) would be more predictive of reaction times (RTs) in the blocked than in the random condition, because the range of RTs across the experiment would increase in the blocked condition. Indeed, we found that the standard deviations and ranges of RTs were larger in the blocked than in the random condition. In addition, an examination of items at the difficulty extremes (i.e., very easy vs. very difficult) demonstrated a response bias. In regression analyses, a predictor set of seven sublexical, lexical, and semantic variables accounted for 2.8\% more RT variance (and 2.6\% more zRT variance) in the blocked than in the random condition. These results indicate that response deadlines apply to megastudies of reading aloud, and that the influence of predictors may be underestimated in megastudies when item presentation is randomized. In addition, the CDP++ model accounted for 0.8\% more variance in RTs (1.2\% in zRTs) in the blocked than in the random condition. Thus, computational models may have more predictive power on item sets blocked by difficulty than on those presented in random order. The results also indicate that models of word processing need to accommodate response criterion shifts.",
keywords = "Megastudy, Reading aloud, Response deadline",
author = "Cortese, {Michael J.} and Khanna, {Maya M.} and Robert Kopp and Santo, {Jonathan B.} and Preston, {Kailey S.} and {Van Zuiden}, Tyler",
year = "2017",
month = "2",
day = "16",
doi = "10.3758/s13421-016-0678-8",
language = "English (US)",
volume = "45",
pages = "589--599",
journal = "Memory and Cognition",
issn = "0090-502X",
publisher = "Springer New York",
number = "4",

}

TY - JOUR

T1 - Participants shift response deadlines based on list difficulty during reading-aloud megastudies

AU - Cortese, Michael J.

AU - Khanna, Maya M.

AU - Kopp, Robert

AU - Santo, Jonathan B.

AU - Preston, Kailey S.

AU - Van Zuiden, Tyler

PY - 2017/2/16

Y1 - 2017/2/16

AB - We tested the list homogeneity effect in reading aloud (e.g., Lupker, Brown, & Colombo, 1997) using a megastudy paradigm. In each of two conditions, we used 25 blocks of 100 trials. In the random condition, words were selected randomly for each block, whereas in the experimental condition, words were blocked by difficulty (e.g., easy words together, etc.), but the order of the blocks was randomized. We predicted that standard factors (e.g., frequency) would be more predictive of reaction times (RTs) in the blocked than in the random condition, because the range of RTs across the experiment would increase in the blocked condition. Indeed, we found that the standard deviations and ranges of RTs were larger in the blocked than in the random condition. In addition, an examination of items at the difficulty extremes (i.e., very easy vs. very difficult) demonstrated a response bias. In regression analyses, a predictor set of seven sublexical, lexical, and semantic variables accounted for 2.8% more RT variance (and 2.6% more zRT variance) in the blocked than in the random condition. These results indicate that response deadlines apply to megastudies of reading aloud, and that the influence of predictors may be underestimated in megastudies when item presentation is randomized. In addition, the CDP++ model accounted for 0.8% more variance in RTs (1.2% in zRTs) in the blocked than in the random condition. Thus, computational models may have more predictive power on item sets blocked by difficulty than on those presented in random order. The results also indicate that models of word processing need to accommodate response criterion shifts.

KW - Megastudy

KW - Reading aloud

KW - Response deadline

UR - http://www.scopus.com/inward/record.url?scp=85013067773&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85013067773&partnerID=8YFLogxK

U2 - 10.3758/s13421-016-0678-8

DO - 10.3758/s13421-016-0678-8

M3 - Article

C2 - 28211025

AN - SCOPUS:85013067773

VL - 45

SP - 589

EP - 599

JO - Memory and Cognition

JF - Memory and Cognition

SN - 0090-502X

IS - 4

ER -