In an assessment center (AC), assessors generally rate an applicant's performance on multiple dimensions within just 1 exercise. This rating procedure introduces common rater variance within exercises but not between exercises. This article hypothesizes that this phenomenon is partly responsible for the consistently reported finding that the AC lacks construct validity. Therefore, the effect of common rater variance on discriminant and convergent validity is examined via a multitrait-multimethod design in which each matrix cell is based on the ratings of different assessors. Two independent studies (N = 200, N = 52) showed that within-exercise correlations decrease when common rater variance is excluded, both across exercises (by having assessors rate only 1 exercise) and within exercises (by having assessors rate only 1 dimension per exercise). Implications are discussed in the context of the recent debate on the appropriateness of the within-exercise versus the within-dimension evaluation method.