Grading is a subject of great mystery and concern at most schools, including Princeton, especially as finals season approaches. But conversations around grading reform are less prominent, and the Princeton community has yet to sufficiently grapple with the important questions of this debate. Are the systems we have in place conducive to student learning and growth — or are they harmful to those objectives? And do they constitute an accurate standard for assessing students’ academic progress and achievements (if such a standard is even possible)?
The grade point average (GPA), which aggregates a student’s grades across all of their coursework, is, by most accounts, an imperfect and flawed tool for investigating the aforementioned matters. Its ubiquity, unfortunately, isn’t diminishing, so we must revise it in order to better meet our educational goals of fostering an appreciation for (and understanding of) a diversity of disciplines, developing an open and critically discerning mind, and, yes, serving humanity. To accomplish this, we should contextualize Princeton’s grading benchmarks by providing median grades for each course alongside students’ earned grades on their transcripts. This method, also adopted at other colleges, will give a fuller account of a student’s academic circumstances, the nature of their coursework, and their relative successes, to clarify what may seem at times arbitrary and random.
Median grades put a student’s performance in a class in the context of the overall course’s performance. Their introduction would be a quick, effective way of allowing students themselves — plus employers and graduate school admissions offices — to see and compare the student’s achievements in their courses to those of their peers. Quite often, low or unusual grades can puzzle transcript reviewers — were they flukes or signs of poor work? Median grades, in tandem with corroborating documentation (e.g., on a family emergency or severe illness), can resolve these doubts, showing how anomalous a student’s performance actually was. Reporting the contextual median is thus a safeguard against potential misinterpretation of a student’s grades and serves to aid both the student and those requesting their transcript.
This metric benefits Princetonians not only beyond the University, but also while they’re on campus. A ramification of contextualizing grades through medians is the encouragement of academic venturing; students who might’ve otherwise feared taking a course interesting to them — because of the chances of receiving a “bad” grade (lacking any supporting explanation) — might be more inclined to go ahead and take a “risk,” if their individual grade were contextualized. This may, as a result, reduce stress and anxiety levels surrounding grading, and help build an environment of learning for learning’s sake, rather than for the sole pursuit of a floating letter. Posting medians, then, would assist students in their journey of broad exploration, part of Princeton’s mission of bestowing a comprehensive liberal arts education.
There is also a rationale for relying on the median rather than another measure of central tendency, namely the average. While all such measures are fairly easy to collect — and indeed, many classes at Princeton gather such data or make them known to their students — the median describes the middle value of a frequency distribution of grades and, unlike an average, is not heavily skewed by outliers. To ensure adequate sample sizes, reporting medians would be mandatory for courses with, say, 10 or more students enrolled, and optional for those with under 10 students. This procedure wouldn’t apply, however, to departmental independent work, e.g., junior papers or the senior thesis, due to the personal character of those components. Classes taken Pass/D/Fail or “Audit” would be exempt as well. Below is a prototype:
Course Title    Grade    Median
Course #1       B+       A
Course #2       A-       A-
Course #3       A+       B+
Course #4       C        B-
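The outlier-resistance argument above can be illustrated with a short, purely hypothetical calculation. Suppose a five-student seminar yields four strong grades and one failing grade (converted to a 4.0 scale); the numbers below are invented for illustration, not drawn from any real course:

```python
from statistics import mean, median

# Hypothetical grade points for a five-student seminar (4.0 scale):
# four strong grades and a single failing outlier.
grades = [4.0, 4.0, 3.7, 3.3, 0.0]

print(mean(grades))    # 3.0 — the lone 0.0 drags the average down
print(median(grades))  # 3.7 — the middle value ignores the outlier
```

A transcript reporting the average (3.0, roughly a B) would understate how the typical student fared, while the median (3.7, an A-) reflects it faithfully — which is the argument for printing medians rather than means.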
This isn’t an outlandish or outrageous proposal, for we already recognize that it works. Institutions of higher education across the country, like Indiana University and the University of California, Berkeley, now track average or median grades — in fact, the former goes further, detailing the “percentage of students who are majors in the given course department,” the grade distributions per section, and so forth. Cornell University, as yet another example, currently uses a system of contextualized grading through the reporting of class medians on student transcripts, as Dartmouth College has done since 1994.
The merits of medians don’t just accrue for students and transcript viewers, though — they also aid faculty members: recording them would enable Princeton instructors to tweak and adjust their courses, if necessary. For instance, a median grade far lower than in previous years’ iterations might not be a warning sign on its own, but a persistent trend of significantly decreasing median grades might indicate a need to change something about the course (or a decline in the quality of students’ work). Medians would function, then, as another form of feedback on a course’s structuring, pacing, and elements — a kind of self-evaluation. And if faculty members had this data for all courses, they could check for and ensure consistency of grading across academic fields, thus monitoring for and controlling rampant grade inflation or wild fluctuations.
As at Dartmouth, these median figures shouldn’t be publicly circulated (e.g., on the Registrar’s website), in order to deter students from attempting to game course selection and purposely pick “easier” classes. After all, the intent of implementing this design is, much like the present P/D/F policy, to facilitate academic risk-taking, lessen the focus on the letter grade itself, and place more emphasis on the fun and enjoyment of learning. The intellectual rewards of wrestling with, and eventually mastering, unfamiliar or difficult material should be more treasured than the final letter grade. Therefore, students should only have statistics about classes they’ve finished taking.
Of course, the transition to a contextual GPA may come with its own complications. The biggest foreseeable troubles are on the logistical front. For a case study, look no further than the University of North Carolina at Chapel Hill, which killed its contextual GPA in 2017, despite years of efforts towards launching such a system. The UNC administration concluded that adding extra columns to the transcript and revamping the registrar’s site would be infeasible, citing “technical challenges” and “prohibitive costs.” Still, for Princeton, an educational institution with ample resources, these shouldn’t be large issues — indeed, they pale in comparison to far more serious problems that have to be dealt with (e.g., the conversion of certificate programs to minors or campus construction).
Contextualizing the student GPA through median grades grants every stakeholder involved a win: students have improved access to information at their fingertips about their performance and are incentivized to engage in academic exploration and risk-taking; faculty can use the data to make modifications to, and prioritize consistency in, their courses; and employers and graduate school admissions committees have a fuller picture about students’ work in each course and the personal circumstances that reflect upon that work.
This humble suggestion ought to be merely the start of an important discussion about grading reform at Princeton — one which we can hope won’t cease anytime soon. Adopting contextual grading through class medians, as outlined above, will prove an excellent first step towards demystifying the student GPA and grading process at Princeton.
Henry Hsiao is a first-year contributing columnist and assistant Opinion editor from Princeton, N.J. He can be reached at henry.hsiao@princeton.edu.