The following is a guest contribution and reflects the author’s views alone.
In her recent op-ed titled “Princeton, stop using ChatGPT,” Ava Johnson ’27 criticizes the use of generative AI at Princeton, especially in the context of schoolwork, writing that “generative AI is bad for the environment, bad for our brains, and often incorrect.” Her piece concludes that “using ChatGPT all the time makes you look stupid, because it is stupid. We’re smarter than that.” While I agree with Johnson’s overarching critique of our overreliance on generative AI in place of actual human creativity, her portrayal of the relationship between generative AI and schoolwork is misleading and one-dimensional.
Complete avoidance of AI is not a solution to the issue Johnson rightfully points out. Avoidance merely leaves us ignorant, standing still in a moving world. Instead, Princeton students should discuss the use of AI tools in education with honesty and open-mindedness, treating them as novel tools rather than a morally objectionable scourge.
In any discussion about novel, quickly developing technology, it’s important to be precise when describing the current landscape. One of Johnson’s main points is that ChatGPT is often incorrect and unreliable, a claim she supports with a 2024 study, which “found that 52 percent of answers provided by ChatGPT are false.” However, Johnson does not mention that the study only looks at programming questions. The study’s own phrasing also suggests that many of ChatGPT’s answers contain errors rather than being entirely false, stating that “52% of ChatGPT answers contain incorrect information.” Indeed, ChatGPT isn’t very reliable. Nonetheless, when Johnson frames ChatGPT’s errors as an unequivocal reason to avoid the tool, she misrepresents the truth. While ChatGPT isn’t perfect, it’s still useful, and it outperforms humans on many tasks, both academic and non-academic. And it is improving quickly.
In addition, Johnson portrays ChatGPT as a direct substitute for existing technologies, when this is not the case. She repeatedly compares ChatGPT queries with Google searches, arguing that the former is rendered unnecessary by the latter. But Johnson overlooks the reality that ChatGPT has different capabilities than pre-existing technologies. ChatGPT has digested vast amounts of information as input and can reference this breadth of information in a way that no human can — an individual parsing through a Google search will assimilate nowhere near as much information as the AI model. Moreover, ChatGPT can use previous queries as data, generate answers tailored to a specific question, and algorithmically improve the accuracy of its responses — things Google cannot do.
Omitting nuance from conversations about AI leads to unproductive discussion. When Johnson argues that students shouldn’t use AI because it uses more energy than “a simple Google search would,” she flattens the debate. She does not take into account other factors that should inform this characterization, such as the possibility that AI could help us advance environmental research, as it already has.
For all that I’ve highlighted about the capabilities of generative AI, let me be clear: I strongly agree with Johnson’s concern over our reliance on AI tools. I think the use of ChatGPT for academically dishonest purposes is symptomatic of wider cultural issues in higher education, where students prioritize the appearance of competence (i.e., grades) over actual competence. When we use ChatGPT to do our work for us, we deprive ourselves of the chance to actually learn. We erode our integrity. Johnson also makes great points about the perpetuation of misinformation by AI tools, especially in our time of political polarization, and about the antisocial nature of talking to chatbots.
However, the answer to these issues is not in indiscriminately avoiding ChatGPT or in calling people “stupid.” The answer is in thoughtful, open conversation about the role of AI in our classrooms and our lives more broadly, and in recognizing that while its use is morally complicated, we mustn’t deprive ourselves of understanding a key technological development of our changing world. Is it unethical to ask ChatGPT for a more in-depth explanation of an example question in the textbook? Is it unethical to ask Claude to prompt me with questions as I brainstorm ideas for an essay? Is it unethical for my professor to ask Gemini to help them structure a lesson plan?
Maybe. Maybe not. Regardless, these questions can’t be addressed if condemnation of AI use is the centerpiece of our cultural discussions. We will not find answers to difficult questions about AI through out-of-context facts or by focusing on condemning those who are actively integrating this technology into their lives.
So, let us not stop using ChatGPT. Instead, let’s use it honestly and discuss it purposefully. Be an open and conscientious citizen of our world, especially amid all its technological change. Talk openly with professors about their policies on generative AI and what’s motivating those policies. Play around with an LLM to understand what these systems are good at and where they still fall short. Think — really, I encourage you to take two minutes to sit in silence and think — about what you feel the trajectory of our rapidly changing future will be, and read some radically different forecasts about what the future of AI will mean for us. As Johnson articulates so passionately, we must think for ourselves. This can’t happen if we have our heads buried in the sand, ignoring a powerful technological tool at the forefront of our cultural, political, and academic spheres. After all, we’re smarter than that.
Eliana Du ’28 is the head Cartoon editor for the ‘Prince.’ She may be reached at elianadu[at]princeton.edu.
