Observing Success

September 26, 2016

As we discussed earlier this month, holding teacher prep programs accountable for the performance of their graduates is no easy task. The data is often scant, and researchers usually can't distinguish any standouts in a sea of mediocre or weak programs.

That's why we are pretty enthusiastic about a new study from Matthew Ronfeldt and Shanyce Campbell of the University of Michigan. Previous studies looked to only one data source, graduates' value-added scores, to determine the strength of program graduates. These two researchers use multiple measures: first, teacher observation scores, and second, value-added scores. They unearth clear evidence that others have not: not all programs are created equal.

In the sample of 118 programs, 21 stand out for graduating teachers who consistently earn either higher or lower observation scores than graduates of most other programs.

The waters do get muddied a bit when folding the value-added measures back in. Not surprisingly, programs that did really well or really badly on observation scores didn't always have similar results on value-added measures. In fact, only about 40 percent of the programs produced observation and value-added scores that were similarly positive or negative.

Nevertheless, if a policymaker were to assess program quality by looking only at the overlapping data, it seems safe to conclude that there are programs clearly succeeding or failing: producing teachers who consistently get both great evaluations and great test score results, or the reverse.

When all was said and done, there were 25 standout programs in the state, but as is the frustrating custom of academic research, these programs were not identified.

These promising results reinforce our interest in multiple measures for evaluating program quality. One such additional measure could be provided by TPI-US, essentially a comprehensive on-site inspection process imported from the United Kingdom. In its assessment process, teams of four trained education professionals visit prep programs to collect evidence on program quality and to provide actionable feedback. They observe student teachers and course instructors, examine data on candidate performance, and conduct interviews with key stakeholders, including graduates and leaders at the schools that hire them, all of which could serve as yet another source of data on a program's quality.
