As we sort through the data from the first edition of the Teacher Prep Review, one takeaway for teacher prep accountability systems continues to emerge: outcomes data alone are insufficient in a field that is structurally weak.
This map provides one illustration of the problem. In the 13 states in red, not a single elementary teacher prep program was identified as having earned top marks on the most fundamental skill of elementary preparation -- learning how to teach children to read. (For more information about our early reading standard, see the Early Reading Standard Findings Report).
Consider the implication: If a state like Maine or Oregon were to compare programs using data on the reading performance of graduates' students, some program would inevitably look like the exemplar even though that program is, in fact, failing to provide fundamental skills to its candidates. It's like winning a race with a 10-minute mile because everyone else runs an 11-minute mile.
To assess and improve teacher prep programs, we need information about both the training candidates receive and how graduates perform in their own classrooms. As states work to expand the outcomes data available, we look forward to measuring more programs on our Evidence of Effectiveness standard. But to truly elevate our teacher prep programs to the level of other countries (whose graduates would show up as 4-minute milers), input measures of structural soundness must be examined too. They provide the necessary context for identifying areas in need of improvement.