The public’s
response to NCTQ’s recent report, Training
Our Future Teachers: Easy A’s and What’s Behind Them, was generally
positive. Clearly, many teacher prep graduates (or people who know graduates)
agree with our central finding: teacher prep is not nearly rigorous enough to
prepare candidates for the challenge of teaching. However, a commentary by Dean Donald Heller of Michigan State University, published in the Chronicle of Higher Education, lists several points of criticism of the report and of the institutional ratings that accompanied it. We address each point below.
- Our analysis uses commencement brochures, which often identify graduating students as having earned honors based on meeting or exceeding a grade point average (GPA) cutoff. Given that our data source offers this proxy for GPAs (see page 2 of the report), Dean Heller dislikes our use of the term “GPA differential” to define the difference between the proportion of teacher candidates earning honors and that of all undergraduate students earning honors. The question is whether using honors as a proxy for GPA rather than GPA itself makes a difference. Heller contends that it does, noting that all we know is whether a group of students fell below a certain GPA cut line, not how far below they fell. He suggests that the difference could be as small as a hundredth of a point. But given that our analysis generally aggregates hundreds or even thousands of students, this is implausible. Student GPAs are distributed across a range. It’s virtually impossible that education school students would cluster just above the honors cut line while most students in an institution cluster just below it. Moreover, while we grouped all levels of honors (generally Latin honors) together, we know that many students earned higher levels of honors (e.g., summa cum laude), which require much higher GPAs (often a 3.9 or higher). In short, our proxy measure is indeed tracking a meaningful underlying GPA differential between education students and all students.
- Given the structure of many education programs, secondary teacher candidates often have content majors outside the department of education. When we rate institutions based on the difference between the proportion of teacher candidates earning honors and the proportion of all undergraduates earning honors, in some cases it is clear that we cannot identify or include those secondary candidates who are housed outside the department of education. To address this limitation, we use a more generous scoring rubric to differentiate between meeting and failing the standard for institutions with less detailed commencement brochures. Dean Heller claims that our approach for dealing with these less detailed commencement brochures “fudges the issue.” To the contrary, we tested both approaches with 50 institutions (not 29, as Dean Heller stated) to see whether institutions with less detailed information were at a disadvantage. In fact, the vast majority of ratings were the same or better, and only four institutions had lower scores when rated on less detailed information (whereas Dean Heller incorrectly stated that the rating was different, and implicitly worse, in six cases). (For more information, see page 3 of the methodology.)
- Among the many possible explanations for teacher candidates’ higher grades at graduation, we considered the prospect that they are academically stronger than their non-candidate peers at the same institution. If this were the case, those teacher candidates should also earn higher grades in the general education coursework that they, like all students on campus, take in the first few years of college. Our report considered four studies dating back to the 1980s that looked at whether future teacher candidates earned higher grades than other students. The findings were mixed, but even the studies that did find higher grades for future teacher candidates reported a difference of no more than a tenth of a grade point. A more recent and larger data source, a National Center for Education Statistics survey of over 16,000 students, found that teacher candidates’ grades in their first year of college are roughly equivalent to those of their peers. In short, those studies suggest that the GPA differential appears only after teacher candidates enter preparation programs, indicating that the cause lies in the programs themselves. Given this research, it’s unclear why Dean Heller states that we have not disproven that teacher candidates are academically stronger than their peers.
- The central thesis of our report focuses on the evidence that education professors are systematically assigning more of a different kind of work: criterion-deficient assignments. Our extensive statistical analysis of course assignments and course grades, an analysis completely unmentioned by Heller, shows that these overly broad or subjective assignments are associated with higher grades, and that they are twice as common in teacher preparation as in other academic areas. Rather than address this large topic of assignment type, Dean Heller makes much of an endnote in our report that, admittedly, was poorly phrased. This note said that the cause of higher grades in teacher prep was not lax grading standards, which Heller takes to mean that we don’t believe the main point of our own report. By “lax grading standards” we meant grading in which education professors would award an A to student work that other professors would give a C. As we explain in the report, we do not believe that lax grading standards are more prevalent in teacher preparation than in other coursework. We assert that it is the type of assignments given, rather than the grading standards applied, that reduces the rigor of preparation programs.
- As we noted above, teacher candidates and non-candidate students enter their junior year with roughly the same grades, with both groups having taken a wide variety of courses in academic disciplines outside of teacher preparation. But in the remaining two years of their college careers, teacher candidates’ GPAs rise to the point that their rate of earning honors is roughly 50 percent higher than that of all graduating students (a short illustrative calculation follows this list). The more courses teacher candidates have to take outside of education during their last two years, the more likely it is that high grades in education courses are the explanation for any large differential in honors awarded. Heller makes the rather baffling claim that our methodology does not consider that many education majors take courses outside the college of education. To the contrary, to the extent that candidates who take such courses still earn honors, it only underscores our point about the significant impact that higher grades in education courses can have on overall GPAs.
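To make the honors-based arithmetic concrete, here is a minimal sketch of how a “GPA differential” and a “roughly 50 percent higher” honors rate are computed. The cohort sizes and honors counts below are purely hypothetical illustrations, not figures from the report:

```python
# Illustrative only: hypothetical honors counts, not data from the Easy A's report.
all_graduates = 2000          # all undergraduates in a hypothetical cohort
all_honors = 600              # of whom 600 earn Latin honors -> 30%

teacher_candidates = 200      # teacher prep graduates in the same cohort
candidate_honors = 90         # of whom 90 earn Latin honors -> 45%

all_rate = all_honors / all_graduates                   # 0.30
candidate_rate = candidate_honors / teacher_candidates  # 0.45

# The honors-based "GPA differential": the gap between the two honors rates.
differential = candidate_rate - all_rate                # 0.15, i.e., 15 percentage points

# The same gap as a relative comparison: 45% is 50 percent higher than 30%.
relative_increase = (candidate_rate - all_rate) / all_rate  # 0.50

print(f"Honors rate, all students: {all_rate:.0%}")
print(f"Honors rate, teacher candidates: {candidate_rate:.0%}")
print(f"Differential: {differential * 100:.0f} percentage points")
print(f"Relative increase: {relative_increase:.0%}")
```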
In conclusion,
Dean Heller’s analysis is long on criticism but short on accuracy. He ignores the study’s most important finding, about the nature of assignments in teacher prep, and misrepresents many of the key points of Easy A’s, including how we tested the rating of programs with less detailed information and the implications of teacher candidates taking courses outside the college of education. Certainly, the Easy A’s report opens the door to more questions and future
research. However, the conclusions it reaches and the recommendations it makes
are solid and resonate with the many teachers who have read the report.