An examination of questionnaire evaluation by expert reviewers

Research output: Contribution to journal › Article

46 Scopus citations

Abstract

Expert reviews are frequently used as a questionnaire evaluation method but have received little empirical attention. Questions from two surveys are evaluated by six expert reviewers using a standardized evaluation form. Each of the questions has validation data available from records. Large inconsistencies in ratings across the six experts are found. Despite the lack of reliability, the average expert ratings successfully identify questions that had higher item nonresponse rates and higher levels of inaccurate reporting. This article provides empirical evidence that experts are able to discern questions that manifest data quality problems, even if individual experts vary in what they rate as being problematic. Compared to a publicly available computerized question evaluation tool, ratings by the human experts positively predict questions with data quality problems, whereas the computerized tool varies in success in identifying these questions. These results indicate that expert reviews have value in identifying question problems that result in lower survey data quality.

Original language: English (US)
Pages (from-to): 295-318
Number of pages: 24
Journal: Field Methods
Volume: 22
Issue number: 4
DOIs
State: Published - Nov 1 2010


Keywords

  • expert reviewers
  • measurement error
  • pretesting
  • questionnaire design

ASJC Scopus subject areas

  • Anthropology
