Stability of Teacher Value-Added Rankings Across Measurement Model and Scaling Conditions

Research output: Contribution to journal › Article

Abstract

Value-added assessment methods have been criticized by researchers and policy makers for a number of reasons. One concern is the sensitivity of model results to the choice of outcome measure. This study examined the utility of incorporating multivariate latent variable approaches within a traditional value-added framework. We evaluated the stability of teacher rankings across univariate and multivariate measurement model structures and scaling metric combinations using a cumulative cross-classified mixed effects model. Our results showed that multivariate models were more stable across modeling conditions than univariate approaches. These findings suggest there is potential utility in incorporating multiple measures within teacher evaluation systems, yet future research will need to evaluate the degree to which such models recover known population parameters via Monte Carlo simulation.
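
For readers unfamiliar with the modeling framework named in the abstract, the sketch below illustrates the general idea in Python (not the authors' actual model, software, or data): students are cross-classified by teacher and school, a mixed-effects model with crossed random effects is fit with statsmodels, and teacher rankings derived from the predicted teacher effects are compared across two scalings of a simulated outcome using a Spearman rank correlation. The cumulative (multi-year) and multivariate latent-variable aspects of the authors' approach are omitted, and every variable name and simulated value is hypothetical.

```python
# Illustrative sketch only: simulated data and a crossed random-effects
# model, used to compare teacher rankings across two outcome scalings.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Simulate students linked to teachers and schools (cross-classified).
n_students, n_teachers, n_schools = 400, 20, 5
teacher = rng.integers(0, n_teachers, n_students)
school = rng.integers(0, n_schools, n_students)
teacher_effect = rng.normal(0.0, 0.4, n_teachers)
prior = rng.normal(0.0, 1.0, n_students)
score = 0.6 * prior + teacher_effect[teacher] + rng.normal(0.0, 1.0, n_students)

df = pd.DataFrame({"score": score, "prior": prior,
                   "teacher": teacher, "school": school})

def teacher_ranking(outcome):
    """Fit a crossed random-effects model (one dummy group; teacher and
    school entered as variance components) and rank teachers by their
    predicted random effects."""
    model = smf.mixedlm(f"{outcome} ~ prior", data=df,
                        groups=np.ones(len(df)),  # single group => crossed effects
                        re_formula="0",           # no group-level random intercept
                        vc_formula={"teacher": "0 + C(teacher)",
                                    "school": "0 + C(school)"})
    result = model.fit()
    effects = list(result.random_effects.values())[0]  # predicted random effects
    return effects.filter(like="teacher").rank()

# A monotone rescaling of the same outcome stands in for a different
# scaling metric; compare the resulting teacher rankings.
df["score_rescaled"] = np.sign(df["score"]) * np.abs(df["score"]) ** 1.2
rho, _ = spearmanr(teacher_ranking("score"), teacher_ranking("score_rescaled"))
print(f"Spearman rank correlation across scalings: {rho:.2f}")
```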

Original language: English (US)
Pages (from-to): 196-212
Number of pages: 17
Journal: Applied Measurement in Education
Volume: 30
Issue number: 3
DOI: 10.1080/08957347.2017.1316273
ISSN: 0895-7347
Publisher: Routledge
State: Published - Jul 3 2017


ASJC Scopus subject areas

  • Education
  • Developmental and Educational Psychology

Cite this

Hawley, L. R., Bovaird, J. A., & Wu, C. R. (2017). Stability of Teacher Value-Added Rankings Across Measurement Model and Scaling Conditions. Applied Measurement in Education, 30(3), 196-212. https://doi.org/10.1080/08957347.2017.1316273

