A Missed Opportunity for Clarity

Problems in the Reporting of Effect Size Estimates in Infant Developmental Science

Laura Mills-Smith, Derek P. Spangler, Robin Panneton, Matthew S. Fritz

Research output: Contribution to journal › Article

3 Citations (Scopus)

Abstract

Several years ago, the American Psychological Association began requiring that effect size estimates be reported to provide a better indication of the associative strength between factors and dependent measures in empirical studies (Publication Manual of the American Psychological Association, 2010, Author, Washington, DC). Accordingly, developmental journals now require or strongly recommend that effect size estimates be included in published work. This trend has potentially important benefits for infancy research, given the inherent difficulty of establishing conceptually strong findings from highly variable performance in typically small samples. This study examined recent infant research from select journals for the accuracy and interpretative value of effect size estimates. Demographics, sample size, design, and statistical data were coded from 158 articles published between 2007 and 2012, yielding 878 effect size estimates from experimental findings with infants using behavioral methods. Descriptive and distribution statistics were calculated for (1) statistical tests, (2) effect size parameters, and (3) effect size interpretations. Although partial eta squared (ηp²) and eta squared (η²) were the most common estimates (49% and 42%, respectively), "η confusion" was apparent, and interpretation of effect size estimates was virtually nonexistent. Thus, effect size estimates are not yet shaping infant development research, despite criticisms of sole dependence on null hypothesis significance testing (e.g., American Psychologist, 49, 997, 1994). Suggestions for more accurate selection of effect size estimates and for interpretative cutoffs are offered to improve empirical clarity.
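As background for the "η confusion" noted in the abstract, the two most commonly reported estimators differ only in their denominators. These are the standard textbook definitions, not formulas reproduced from the article itself:

    η²  = SS_effect / SS_total
    ηp² = SS_effect / (SS_effect + SS_error)

In a single-factor design the two coincide, because SS_total = SS_effect + SS_error. In multi-factor designs, the denominator of ηp² excludes the other factors' sums of squares, so ηp² ≥ η², and labeling one as the other misstates the size of an effect. For a hypothetical two-factor example with SS_effect = 20, a second factor contributing SS = 60, and SS_error = 120: η² = 20/200 = .10, whereas ηp² = 20/140 ≈ .14.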

Original language: English (US)
Pages (from-to): 416-432
Number of pages: 17
Journal: Infancy
ISSN: 1525-0008
Publisher: Wiley-Blackwell
Volume: 20
Issue number: 4
DOI: 10.1111/infa.12078
State: Published - Jul 1 2015

Fingerprint

Research
Confusion
Child Development
Sample Size
Publications
Demography
Psychology

ASJC Scopus subject areas

  • Pediatrics, Perinatology, and Child Health
  • Developmental and Educational Psychology

Cite this

Mills-Smith, Laura; Spangler, Derek P.; Panneton, Robin; Fritz, Matthew S. A Missed Opportunity for Clarity: Problems in the Reporting of Effect Size Estimates in Infant Developmental Science. In: Infancy, Vol. 20, No. 4, 01.07.2015, pp. 416-432. https://doi.org/10.1111/infa.12078
