Impact of common rater variance on construct validity of assessment center dimension judgments

Nanja J. Kolk*, Marise Ph. Born, Henk Van der Flier

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

16 Citations (Scopus)

Abstract

In an assessment center (AC), assessors generally rate an applicant's performance on multiple dimensions within a single exercise. This rating procedure introduces common rater variance within exercises but not between exercises. This article hypothesizes that this phenomenon is partly responsible for the consistently reported finding that the AC lacks construct validity. Therefore, the effect of common rater variance on discriminant and convergent validity is examined via a multitrait-multimethod design in which each matrix cell is based on ratings from different assessors. Two independent studies (N = 200, N = 52) showed that within-exercise correlations decrease when common rater variance is excluded both across exercises (by having assessors rate only one exercise) and within exercises (by having assessors rate only one dimension per exercise). Implications are discussed in the context of the recent debate on the appropriateness of the within-exercise versus the within-dimension evaluation method.

Original language: English
Pages (from-to): 325-337
Number of pages: 13
Journal: Human Performance
Volume: 15
Issue number: 4
DOIs
Publication status: Published - 2002
