In just a couple of weeks, NCTQ will release an updated set of ratings for graduate and alternative-route teacher prep programs. These results are made all the more interesting by yesterday's release of timely and promising data showing that programs are indeed making positive changes to improve the quality of their preparation.
The focus of a new CALDER working paper from economists Dan Goldhaber (of the University of Washington) and Cory Koedel (University of Missouri-Columbia) is actually an ineffectual experiment they conducted in 2013, about a month after the first set of NCTQ ratings was released. That experiment involved reaching out to a subset of programs NCTQ had just rated and suggesting how they might improve their ratings. Given the intense animosity toward the ratings at the time of the first release, it wasn't all that surprising that most programs responded with "thanks, but no thanks"--and some not nearly so politely.
What we find more interesting, and indeed exciting, is a broader finding from their work that has little to do with the experiment: substantial early movement in the right direction among the full set of programs. Between 2013 and 2016, elementary programs were nearly twice as likely to make a change that improved their rating as one that hurt it.
Not only is there movement in a positive direction, but the nature of the scoring changes also appears promising. Given that most of the changes were improvements of a point or so on our four-point scale--as opposed to the suspiciously large jumps we would see had programs leaped from a 0 to a high score of 4 in a few short years--programs appear to be making legitimate progress.
This study provides external confirmation of what we ourselves are finding: big gains in the number of programs delivering scientifically based instruction in early reading, with a 33 percent increase since 2013 in the number of programs meeting our reading standard.
At the outset of producing the Teacher Prep Review, plenty of skeptics warned us that institutions would never respond to our ratings. Certainly that is what the outcomes of other reform efforts would lead you to predict. Large-scale, far more costly efforts undertaken by the federal government, state governments, private foundations, and the field of teacher education itself (e.g., The Holmes Group, CAEP's struggle to raise standards) have all come up empty to date.
These data provide the first piece of evidence that programs--faced with the right incentives and, to be honest, disincentives--may be willing to change.
Of course, because these results were not part of the actual experiment, they do not provide evidence of causality. Still, NCTQ has been at the forefront of pushing programs to deliver evidence-based reading instruction. We have also taken the lead in pressing states for over a decade to raise their standards for program approval, which led a number of states to insist that their programs become more selective. While we cannot definitively assert that we caused these improvements, we think it highly likely that the Teacher Prep Review played a substantial role in moving the ball yards--not inches--toward the goal.
Our inspiration for the Teacher Prep Review remains Abraham Flexner, who in 1910 rated the nation's medical schools, helping to precipitate dramatic improvements in how doctors were trained. Flexner understood the importance of providing actionable, measurable indicators of performance at the level where decisions get made. Those principles are just as true for teacher preparation as they are for medicine.
In addition to this progress, we can also report that we've learned a lot over these past five years, becoming more responsive to programs' legitimate criticisms of how our process has played out in the past. We now look forward to the next five years, accelerating our progress under our shared commitment to giving future teachers the best start possible.