In September, we reported on a performance pay study from Vanderbilt's National Center on Performance Incentives that found no connection between performance pay and student achievement. In our view, the result was hardly a death knell for performance pay. The experiment, which randomly assigned teachers to a group eligible for bonuses and a control group, was designed only to look at student outcomes, not at the potential benefits of performance pay for recruitment and retention.
But was the study well designed for even its limited scope? Not necessarily, according to the Institute of Education Sciences' What Works Clearinghouse, which on Tuesday said the study didn't meet its evidence standards.
The Clearinghouse notes that the Vanderbilt study's authors failed to provide two key pieces of data: student sample sizes at different stages of the study and evidence that the groups of students assigned to treatment- and control-group teachers were equivalent. Because student assignment was controlled by the district rather than the researchers, principals and administrators could have given treatment-group teachers better opportunities to earn bonuses by removing particularly disruptive students or by assigning them the classes thought most ripe for achievement gains. While the study argues that systematic gaming is unlikely, it does concede these weaknesses.
When analyzing a randomized controlled trial, attrition matters: without more data on how many students changed classes and for what reasons, there is no way to rule out movement that may have altered the measured performance of individual teachers. Though the Clearinghouse can come off a bit like a referee stopping a team's forward momentum over an irrelevant technical foul, its critique is a reminder that Vanderbilt's study shouldn't be the last word on performance incentives.
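For readers curious what those checks actually involve, here is a minimal sketch of the two quantities reviewers typically compute: how much of the randomized sample went missing from the analysis (and whether the loss differed between groups), and how far apart the groups were at baseline. The numbers are entirely hypothetical, since the missing counts are precisely what the Clearinghouse says the study failed to report.

```python
# Illustrative sketch only -- all sample sizes and scores below are made up.

def attrition(randomized, analyzed):
    """Share of randomized students missing from the final analysis sample."""
    return 1 - analyzed / randomized

# Hypothetical counts; the real figures are what the study omits.
treat_randomized, treat_analyzed = 1_500, 1_200
control_randomized, control_analyzed = 1_500, 1_350

overall = attrition(treat_randomized + control_randomized,
                    treat_analyzed + control_analyzed)
differential = abs(attrition(treat_randomized, treat_analyzed)
                   - attrition(control_randomized, control_analyzed))
print(f"overall attrition: {overall:.1%}, differential: {differential:.1%}")

# Baseline equivalence: standardized difference on a pre-treatment test score.
# Large gaps suggest the groups weren't comparable before bonuses kicked in.
treat_mean, control_mean, pooled_sd = 48.2, 47.5, 10.0
baseline_gap = (treat_mean - control_mean) / pooled_sd
print(f"baseline difference: {baseline_gap:.2f} standard deviations")
```

Without the underlying counts, neither calculation can be run on the Vanderbilt data, which is the crux of the Clearinghouse's objection.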