Input Substitution and Grade Inflation
I have a colleague whose father, a highly respected historian, taught for many, many years at a university we have all heard of. I have also been working on a report concerning grade inflation at the University of Oregon (we have it, we are about average in severity, and input substitution appears to play a role), and he thought I might be interested in seeing his father's grade book. The grade book is from a principles-level history course taught in 1949. After showing it, I'll calculate some statistics and give you an idea of how this compares to grade distributions today. I'll also present some statistics for the U.S. overall since the mid-1960s and some preliminary results from the study I'm doing.
When my colleague showed this to one of his students, the response was an incredulous "Whoa, dude, this is for a history class?":
How does this compare to today? The mean grade for this distribution is 2.22, between a C and a C+ (ignoring the + and - modifiers, the counts are A: 13, B: 45, C: 54, D: 19, and F: 9; there are 140 students). Here are some statistics for the U.S. for comparison (these are from www.gradeinflation.com, which presents additional statistics as well as links to the source data for each school):
Two episodes account for most of the grade inflation. The first runs from the 1960s through the early 1970s and is usually explained by the draft rules for the Vietnam War. The second begins around 1990 and is harder to explain. High school GPAs rose over the same period (entering UO students had an average high school GPA of 3.30 in 1992, 3.31 in 1996, 3.37 in 2000, and 3.47 in 2004, while SAT scores remained relatively flat, though they did increase modestly in math).
My study finds an interesting correlation in the data. During the time grades were rising, budgets were also tightening, inducing a substitution toward younger and less permanent faculty. I broke grade inflation down by instructor rank and found that it is much higher among assistant professors, adjuncts, TAs, instructors, etc. than among associate or full professors. These instructors are usually hired year-to-year or need to demonstrate teaching effectiveness for the job market, so they have an incentive to inflate student evaluations as much as possible, and high grades are one means of manipulating course evaluations. To try to separate "real" from "nominal" changes in grades over time, I used a market-basket approach much like that used for CPI inflation, where the basket is a set of courses highly influential in the average student's GPA. Changes in course composition, student composition, faculty composition, and institutional rules were all examined, and the change in faculty composition associated with tighter budgets was an important factor. But it does not explain all of the inflation I observed in our data, and I am looking into this further over the summer.
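To make the market-basket idea concrete, here is a minimal sketch of the general approach in Python. The course names, weights, and grade figures are hypothetical placeholders, not data or code from the study; the point is simply that holding the basket fixed lets the change in its average grade reflect grading practices rather than shifts in which courses students take.

```python
# Illustrative fixed-basket grade index (CPI-style), using hypothetical
# courses, weights, and mean grades -- not actual data from the study.
basket_weights = {"HIST 101": 0.40, "ECON 201": 0.35, "MATH 111": 0.25}

mean_grade = {  # hypothetical mean GPA awarded in each basket course, by year
    1995: {"HIST 101": 2.80, "ECON 201": 2.70, "MATH 111": 2.60},
    2005: {"HIST 101": 3.10, "ECON 201": 3.00, "MATH 111": 2.65},
}

def basket_gpa(year):
    """Weighted average grade for the fixed basket in a given year."""
    return sum(w * mean_grade[year][c] for c, w in basket_weights.items())

base, current = basket_gpa(1995), basket_gpa(2005)
# Because the basket is held fixed, the change is "nominal" grade inflation
# rather than a shift in course composition.
print(f"Basket GPA {base:.2f} -> {current:.2f} "
      f"(index {100 * current / base:.1f}, 1995 = 100)")
```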
Update: Here's one measure across faculty rank, the percentage of A grades (%A); other measures show the same pattern. We have three levels of courses: level 1 is principles and level 3 is upper division. Here are the numbers:
                      %A, Level 1   %A, Level 2   %A, Level 3
Full professors            26%           31%           35%
Assistant professors       30%           45%           42%
Adjunct professors         38%           50%           42%
For comparison, the grade distribution from 1949 given above has a %A of 9% for a level 1 class (13/140).
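As a quick arithmetic sanity check on the 1949 figures (a small Python sketch, not something from the original post; the counts are the ones quoted above):

```python
# Recompute the 1949 summary figures from the grade counts quoted above,
# ignoring + and - modifiers as the post does when listing the counts.
counts = {"A": 13, "B": 45, "C": 54, "D": 19, "F": 9}
points = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

n = sum(counts.values())                                      # 140 students
pct_a = 100 * counts["A"] / n                                 # ~9.3% A's
mean_gpa = sum(points[g] * c for g, c in counts.items()) / n  # ~2.24

print(f"N = {n}, %A = {pct_a:.1f}%, mean GPA (no +/-) = {mean_gpa:.2f}")
```

The share of A's, 13/140, is about 9.3%, and the unmodified 4-point mean comes out to roughly 2.24, in line with the 2.22 reported earlier, which presumably reflects the + and - grades.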
Posted by Mark Thoma on Saturday, June 18, 2005 at 12:42 AM in Universities, University of Oregon