Teacher prep value-added models aren't even at stage 1.0

Last Friday's announcement from the Education Department at this Ed Sector event is intended to significantly change what the nation's 1,400 teachers' colleges and university programs have to report to Washington. They'll now have to include "how much the graduates help students learn once they get to the classroom, based on their students' test scores." That's yet another reason states are feeling pressure to get their longitudinal data systems up and running.

Unfortunately, the increasing policy appetite for data exceeds states' current technical capability, to say nothing of problems that haven't even been ironed out at a theoretical level. Three cases in point: Louisiana, Florida, and Tennessee, all early adopters of teacher prep evaluation systems. Louisiana's system is the granddaddy of them all, with a history of just five years. Testifying to the difficulty of getting innovation right on the first try, Louisiana is now scrapping its current system and replacing it entirely. Florida has reported to teacher prep programs on their graduates' effectiveness for two years, but it will now need to develop another system compatible with a new teacher evaluation system; meanwhile, insiders there complain that the scores have proven so unstable that they are hard-pressed to conclude much of anything. Tennessee's publication of its data was commonly viewed as "incomprehensible."

Even as these three states rejigger their original systems, most others have yet to put a first one in place. With no fewer than eight models considered by the Florida stakeholder panel charged with selecting a system, you can imagine how many variations are possible in this arena as each state chooses its own path.

Julie Greenberg