Value-added Teacher Estimates as Part of Teacher Evaluations: Exploring the Effects of Data and Model Specifications on the Stability of Teacher Value-added Scores

Nicole B Kersting, Mei-Kuang Chen, James W. Stigler


In this study we explored the effects of statistical controls, single- versus multiple-cohort models, and student sample size on the stability of teacher value-added estimates (VAEs). We estimated VAEs for all 5th-grade mathematics teachers in a large urban district by fitting two-level mixed models to four cohorts of student data. We found that student sample size had the largest effect on changes in teachers’ relative standing and their designation into performance groups, whereas control variables affected VAEs only minimally. Teacher VAEs nonetheless showed a fair degree of stability: year-to-year correlations ranged between .62 and .66, and changes in teacher effectiveness varied systematically by teacher experience, with beginning teachers showing the largest improvements over the four years under study. Our results suggest that some model specifications are likely to produce value-added scores that reflect meaningful differences among teachers, whereas other specifications may produce unreliable VAEs.




Value-added Analysis; Value-added Models; Value-added Estimates; Stability; Teacher Evaluations; Accountability





Copyright (c) 2019 Nicole B Kersting, Mei-Kuang Chen, James W. Stigler


Contact EPAA/AAPE at Mary Lou Fulton Teachers College