National Board scored

Under considerable pressure, the National Board for Professional Teaching Standards finally released an unflattering study by Tennessee value-added guru Bill Sanders (with colleagues James J. Ashton and S. Paul Wright). The organization dedicated to identifying America's Top Teachers had clumsily and unsuccessfully tried to quash the negative study for fear of bad press--which, of course, ended up worse because of all the secrecy. While there's no question that the study's results were bad news for the popular program, less attention has gone to two key questions: 1) whether the study was any good, and 2) whether it really contradicts previous, seemingly more favorable, studies.

On style, Sanders does seem to relish delivering the bad news. He doesn't bother with the usual qualifications, conditions, and caveats cherished by most academics, stating flatly and repeatedly that NB teachers are no better and no worse than all other teachers. Case closed. However, the strength of his conviction in the face of his actual data should cause some uneasiness. For example, why doesn't he point out that NB teachers came out ahead of non-NB teachers on 27 of 30 measures, even if the individual results weren't statistically significant? The accumulated consistency of those positive findings suggests something besides mere random chance. (Mathematica researchers made the same questionable call in their study of TFA teachers, who were consistently more effective than their peers, though not always significantly so.)
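
His own numbers invite a quick back-of-the-envelope check. As a purely illustrative sketch--treating the 30 comparisons as independent coin flips under the null of no NB effect, an assumption the study's nested data almost certainly violate--the odds of NB teachers coming out ahead on 27 of them by luck alone are vanishingly small:

```python
# Back-of-the-envelope sign test (my illustration, not from the Sanders paper).
# Under the null that NB certification signals nothing, each of the 30
# comparisons should favor NB teachers about half the time. Treating the 30
# measures as independent -- a generous assumption, since the comparisons
# share students and schools -- the chance of NB teachers leading on 27 or
# more of them by luck alone is tiny.
from math import comb

n, k = 30, 27  # 30 comparisons reported, 27 favoring NB teachers
p_at_least_k = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"P(NB ahead on >= {k} of {n} by chance) = {p_at_least_k:.1e}")  # ~4.2e-06
```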

Also on style, Sanders seems to enjoy trashing previous studies for reasons that aren't entirely defensible. Sanders' model is based on HLM ("hierarchical linear modeling"), while Dan Goldhaber's study of NB teachers in North Carolina--a study that Sanders dismisses--used multiple regression analysis. If you buy Sanders' line, Goldhaber's use of regression analysis was akin to doing statistics by astrology. In fact, Sanders' HLM model may have its own unacknowledged drawbacks. HLM worked well for Sanders in Tennessee, where he had years of student data to compensate for the fact that the model doesn't accommodate student background variables, but it's not clear that he enjoys the same wealth of data in this study.
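
For readers curious what the methodological squabble actually looks like, here is a minimal sketch of the two modeling styles on toy data; the column names (score, prior_score, nb_certified, student_id) are mine, and neither line is the specification Sanders or Goldhaber actually estimated:

```python
# Toy contrast between the two approaches (toy data and column names are mine;
# neither model is the actual specification from either study).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_students, tests = 200, 3
df = pd.DataFrame({
    "student_id": np.repeat(np.arange(n_students), tests),
    "prior_score": rng.normal(50, 10, n_students * tests),
    "nb_certified": rng.integers(0, 2, n_students * tests),
})
df["score"] = df["prior_score"] + 2 * df["nb_certified"] + rng.normal(0, 5, len(df))

# Goldhaber-style: ordinary regression, controlling for student background
# through explicit covariates (here a prior score stands in for them).
ols_fit = smf.ols("score ~ nb_certified + prior_score", data=df).fit()

# Sanders-style: a hierarchical (mixed) model with a random intercept per
# student instead of background covariates -- which works best when each
# student contributes several years of scores.
hlm_fit = smf.mixedlm("score ~ nb_certified", data=df, groups=df["student_id"]).fit()

print("OLS estimate of NB effect:", round(ols_fit.params["nb_certified"], 2))
print("HLM estimate of NB effect:", round(hlm_fit.params["nb_certified"], 2))
```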

And that takes us to the next point. On content, the 9-page paper (minus lots of tables) is decidedly breezy. Important details are left out. While there may be no particular reason to doubt that Sanders' findings are valid, it is troubling to find so many blanks, especially in a field that usually falls all over itself touting transparency.

Should the National Board go back to the drawing board and require more evidence of student growth? Without question; its future depends on it. Goldhaber's study made that clear as well. In the meantime, should the study give pause to policymakers who love to heap bonuses on NB teachers? Maybe. But it would be much more productive to take aim at a far bigger and more expensive problem--the ample bonuses handed out to nearly 60 percent of America's teaching corps for having obtained meaningless master's degrees.