Adding the voom to VAM: new research on value-added

As attractive an idea as value-added models (VAMs) may be, statisticians really do need to work out the bugs before the models are used to judge teacher performance. Thomas Kane and Douglas Staiger have just tackled one potential kink with a new study built around an intriguing experiment involving 78 pairs of elementary school classrooms, all taught by National Board certified teachers.

One of the big worries researchers have always had about any attempt to measure teacher effectiveness, including value-added methodology, is that the measures cannot account for the factors that keep teachers from being randomly assigned to classrooms: lobbying by parents, jockeying by teachers, and favoritism by principals. As a result, some teachers may receive VAM scores higher than their actual effectiveness warrants, and others lower.
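To see the worry concretely, here is a minimal simulation sketch. It is not from the study; the `favoritism` term and all the numbers are invented for illustration. It shows how steering stronger students toward particular teachers can contaminate a naive classroom-average measure of teacher effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_students = 100, 25

# True (unobservable) teacher effects.
teacher_effect = rng.normal(0, 1, n_teachers)

# Non-random assignment: some teachers are steered stronger
# students (parent lobbying, principal favoritism), independent
# of how effective those teachers actually are.
favoritism = rng.normal(0, 1, n_teachers)
scores = np.array([
    rng.normal(0, 1, n_students)   # student-level noise
    + 0.8 * favoritism[t]          # sorted-in student ability
    + teacher_effect[t]            # true teacher contribution
    for t in range(n_teachers)
])

# A naive "value-added" estimate: the mean classroom score.
naive_vam = scores.mean(axis=1)

# The naive estimate mixes true effectiveness with sorting bias.
print(np.corrcoef(naive_vam, teacher_effect)[0, 1])  # well below 1
print(np.corrcoef(naive_vam, favoritism)[0, 1])      # sorting leaks in
```

Real VAMs condition on students' prior scores rather than using raw classroom means, but the sketch captures the core concern: whatever sorting the model fails to adjust for shows up in the teacher's score.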

Kane and Staiger addressed the classic "randomization" problem by persuading a number of Los Angeles principals to let the researchers randomly assign classrooms to 78 pairs of National Board teachers for whom the district already had several years' worth of student test scores. Using those earlier scores, they built a VAM estimate of each teacher's effectiveness and then checked whether the teachers earned roughly the same VAM score at the end of the randomly assigned year. In other words, they tested whether we should trust VAM scores produced when teachers aren't randomly assigned, which is the only practical scenario under which such scores will ever be generated.
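The logic of that check can be sketched in a few lines. This is a toy version, not the authors' code: the simulated `vam_nonrandom` and `vam_random` scores stand in for estimates the study derived from actual LAUSD test data, and the noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_teachers = 2 * 78  # 78 pairs of teachers

# Each teacher has some true effect; we observe two noisy VAM
# estimates of it: one from earlier, non-randomly assigned
# classrooms, one from the random-assignment experiment year.
true_effect = rng.normal(0, 1, n_teachers)
vam_nonrandom = true_effect + rng.normal(0, 0.5, n_teachers)
vam_random = true_effect + rng.normal(0, 0.5, n_teachers)

# The validation question: do non-experimental VAM estimates
# still predict effectiveness once assignment is random?
r = np.corrcoef(vam_nonrandom, vam_random)[0, 1]
print(f"correlation between the two VAM estimates: {r:.2f}")
```

If non-random assignment badly distorted the original estimates, the correlation would collapse toward zero; a strong positive correlation is what vindicates the non-experimental scores.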

The bottom line? The scores correlated. Teachers who looked more effective in non-randomly assigned classrooms proved similarly effective under random assignment, meaning districts can trust the scores they're now getting.