Irony in the Battle over Teacher Effectiveness
by Dan Goldhaber and Jane Hannaway

The August 16 Los Angeles Times article on the effectiveness of LA teachers has created a major brouhaha. The paper promised to publish measures of individual teachers' effectiveness. Drastic change in school management may well be in the making, but not without a battle royal. Powerful sides are lining up and the stakes are high.
The U.S. Secretary of Education and numerous superintendents endorsed the idea of letting parents know the estimated effectiveness of their children's teachers, but the local teachers union is organizing a boycott of the LA Times, and pressure to keep teacher performance estimates under wraps is mounting. There is good evidence that pressure works. In New York, unions previously got the state legislature to add a clause in a budget bill that prevented using student achievement to inform teacher performance evaluations.
Behind this firestorm is a deep body of research. Evidence has long shown tremendous variation in how effective individual teachers are. The difference between having a top-notch and a bottom-drawer teacher can be more than a year's worth of learning growth. Such findings, corroborated by principals' casual observations, hold true across states, districts, researchers, and tests. Yet school districts rarely act on this evidence--for good reasons and for bad.
Much of the debate over evaluating teachers has centered on a technical point--the use of so-called value-added estimates of effectiveness based on student test scores. This is wonk-speak for a statistical approach designed to separate out teachers' contributions to student achievement from other influences on learning. Opponents of this approach think the measures are misleading and overcomplicated, and, at a deeper level, question whether student tests should be the basis for evaluation at all. Proponents seek fairness and say that only a sophisticated approach like this can distinguish between a teacher's influence and what a student brings into the classroom. Critics argue that tests capture only part of what students learn, while advocates claim that what is tested is still important. Opponents argue that the estimates are not accurate. Proponents counter that, while not perfect, the measures contain valuable information--some of it predictive of future performance--worth considering alongside classroom observations and other indicators of quality.
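To give a rough sense of the mechanics (in stylized notation of our own choosing, not any particular district's specification), a simple value-added model relates a student's current test score to his or her prior score, observed background characteristics, and a teacher effect:

% Illustrative value-added specification; notation is ours, not any district's actual model.
\[
A_{ijt} = \beta_1 A_{i,t-1} + \gamma' X_{it} + \tau_j + \varepsilon_{ijt}
\]

Here A_{ijt} is the year-t score of student i assigned to teacher j, A_{i,t-1} is the student's prior-year score, X_{it} stands for observed student characteristics, tau_j is the teacher effect the method tries to isolate, and the final term absorbs everything else. The point of the statistical machinery is to keep the estimated teacher effect from soaking up differences in the students a teacher happens to be assigned.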
School districts shy away from using this information largely to avoid political heat, especially from unions. But the big stir caused by the Times' plan to make it public could have been skirted entirely. Had the LA school system used the value-added estimates to inform its evaluation of teachers, the data would have been part of teachers' confidential personnel files and not open to the public. Instead, the district long ignored its own data, leaving it fair game for publication and embarrassing many teachers, at least some of whom could have used the data constructively in a supportive setting.
The deeper story within the story here is that it's risky for school districts to ignore teacher performance information. But the kerfuffle in LA is also an opportunity for districts and unions to use the external pressure to move past the tired debates about whether value-added ought to be used, and toward a discussion of how best it might be used in conjunction with other measures to assess teachers.
We are at a crucial decision point. The worn-out politics of the past can only hold the educational enterprise back, harming teachers caught in ugly political battles, and kids stuck with the weakest teachers.
Dan Goldhaber is the Director of the Center for Education Data & Research at the University of Washington Bothell. Jane Hannaway is the Director of the Urban Institute's Education Policy Center and of CALDER, one of the federally funded National Research and Development Centers.