
Reactions: Princeton faculty discuss ChatGPT in the classroom

Seats in McCosh 50, one of the largest lecture halls at the University.
Angel Kuo / The Daily Princetonian

In November 2022, OpenAI released a chatbot called ChatGPT — and immediately sparked a heated debate about the ethical use of artificial intelligence, especially in education. Trained on years of data obtained from the internet, ChatGPT garnered attention for its ability to generate quirky sonnets and multi-paragraph essays, write code, and even compose music. Given how recently the technology was developed, the full implications of its use have yet to be revealed. In academic circles, however, some have warned that students may rely on ChatGPT to cheat and plagiarize, while others point out that it can be a helpful tool for generating ideas and for modeling responsible use of technology.

With this in mind, we asked a few Princeton faculty members for their opinions on ChatGPT’s role and uses, if any, in the classroom.


A tool like any other

By Professor Arvind Narayanan, Department of Computer Science

The biggest impact of ChatGPT in the classroom has been on tedious, ineffectual writing exercises such as: “What are five good things and five bad things about biotech?” The fact that chatbots have gotten good at this is great news. Fortunately, that’s not how most of us teach at Princeton, so the impact so far has been relatively mild.

In general, though, how should we respond when a skill that we teach students becomes automatable? This happens all the time. The calculator is a good example. In some learning contexts, the calculator is prohibited, because learning arithmetic is the point. In other contexts, say a physics course, computing tools are embraced, because the point of the lesson is something else. We should take the same approach with AI. In addition to using AI as a tool in the classroom when appropriate, we should also incorporate it as an object of study and critique. Large language models (LLMs) are accompanied by heaps of hype and myth, while so much about them, such as the labor exploitation that makes them possible, remains shrouded from view. Class discussions are an opportunity to pull back this curtain.

Students should also keep in mind that ChatGPT is a bullshit generator. I mean the term bullshit in the sense defined by philosopher Harry Frankfurt: speech that is intended to persuade without regard for the truth. LLMs are trained to produce plausible text, not true statements. They are still interesting and useful, but it’s important to know their limits. ChatGPT is shockingly good at sounding convincing on any conceivable topic. If you use it as a source for learning, the danger is that you can’t tell when it’s wrong unless you already know the answer.
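Narayanan’s distinction between plausible and true follows directly from how these systems are trained: a language model learns which words tend to follow which, and it is rewarded for likely continuations, not accurate ones. The toy bigram model below, a deliberately crude Python sketch rather than anything resembling ChatGPT’s actual architecture, makes that objective visible: it samples locally fluent word sequences while having no representation of truth at all.

    import random
    from collections import defaultdict

    # Train a toy bigram model: record which word follows which in a tiny corpus.
    # The training signal is co-occurrence alone; truth never enters the picture.
    corpus = ("the model produces plausible text the model does not check facts "
              "plausible text is not the same as true text").split()

    transitions = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev].append(nxt)

    def generate(start="the", length=12):
        """Sample a fluent-looking word sequence by chaining likely next words."""
        words = [start]
        for _ in range(length - 1):
            followers = transitions.get(words[-1])
            if not followers:
                break
            words.append(random.choice(followers))
        return " ".join(words)

    print(generate())  # grammatical-looking output with no notion of correctness

Scaled-up models add refinements such as human feedback and vastly longer context, which is why their output persuades; but the core pretraining objective of predicting plausible next words is the same, which is why Frankfurt’s term fits.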

Arvind Narayanan is a professor of computer science, affiliated with the Center for Information Technology Policy. He can be reached at arvindn@cs.princeton.edu.


ChatGPT isn’t at our level

By Sarah Case GS ’18, Writing Program

Rather than representing the end of the college essay, ChatGPT offers an opportunity to reflect on exactly what is valuable about a liberal arts education. 

ChatGPT is a fascinating tool, and I’ve been playing around with it in relation to the assignments for my Writing Seminar (WRI 106/7: Seeking Nature). For some tasks, it’s potentially useful — it seems to be okay at generating summaries of sources, which could eventually help more advanced students speed up the research process, akin to reading abstracts before diving into a full article. But because Writing Seminar is about building skills, including how to understand sources and how to craft arguments, asking ChatGPT to summarize a source is only a useful shortcut for students who can already summarize sources themselves. Without that skill, they won’t be able to take the next step. Being able to read a source and extract its main claim is the kind of analytical task that requires practice, and that practice in turn strengthens critical and creative thinking and problem-solving. This is the work of the Writing Seminar.


Beyond summarizing scholarly sources, it seems like the technology is still fairly limited — let’s rein in the idea that all human writing is in danger of being made obsolete! After all, good writing reflects original thinking, and by its own admission, ChatGPT can only “generate text based on patterns in the data.” While noticing patterns is often the first step in producing interesting, important writing, it is only the first step. ChatGPT can’t produce original interpretations based on those patterns. So, once students have a handle on understanding sources, my goal is to introduce them to ChatGPT as a tool to help them track patterns on the road to insightful analysis and original argument — the kind of thinking that, at least for now, AI chatbots can’t manage.

Sarah Case is a lecturer for Princeton’s Writing Program. She can be reached at secase@princeton.edu.

ChatGPT will tank your GPA

By Steven Kelts, School of Public and International Affairs

I don’t believe any Princetonian — or any college student, for that matter — will be tempted to cheat with ChatGPT once they get to know it. You only need to spend a little time reading Twitter posts from users who’ve road-tested it to see what it’s likely to do to your GPA. It’s built on GPT-3.5, but you’ll have a 2.0.

ChatGPT makes errors of fact, errors of analysis, and errors of organization. At the least significant level, it can’t really discern fact from fiction. More importantly, it has no standards of logic to double-check its own analysis. For instance, one user got it to explain (in its characteristic earnest legalese) the meaning of an utterly impossible genetic theory, just by giving it a made-up scientific name. 

The software organizes thoughts like a nervous high schooler who hasn’t prepared much for the AP English Language and Composition exam. It spits out one statement after another — all on topic, for sure, but in the same way that a shopping list is all on topic. 

If you happened to get ChatGPT to write an A-level essay for you once, it would write a C-level essay the next time — each in the same self-assured voice. The software also has the voice of a middle-aged compliance lawyer, so if you happen not to be middle-aged, or particularly officious in your prose, your professor with their flesh-and-blood brain will be able to tell within two lines that this was written by artificial neurons. I teach SPI 365: Tech/Ethics, and there are theories of machine learning that will tell you this type of model will never truly understand, and therefore never be able to analyze. There are ethics of academic honesty, too, which compel you not to try it. But I think it’ll come down to your self-interest: you won’t risk your grade on a tool that will almost certainly fail.

Steven Kelts is a lecturer in SPIA and, frequently, in the Center for Human Values. He also leads the GradFutures initiative on Ethics of AI and teaches SPI 365: Tech/Ethics. Find him online or at kelts@princeton.edu.

A ChatGPT-agnostic approach

By Assistant Professor Matt Weinberg, Department of Computer Science

Before teaching a large undergraduate course in Computer Science theory, my co-instructor Dr. Pedro Paredes and I played around with ChatGPT to get a sense of how students might use it. We were most concerned about ChatGPT solving problem set questions from scratch, so we gave that a shot first.

Every time Dr. Paredes or I queried ChatGPT and skimmed its response, I broke into a huge panic: “*&!%, it actually solved it, what are we going to do!?” (And what’s my role in society now!?) Yet when I read the responses a second time, I realized the solutions were actually nonsense. ChatGPT seems to be phenomenal at producing answers that match the language structure of correct solutions (e.g., it makes good use of “Therefore,” “To see this, observe,” and “pigeonhole principle”), but the logical content is largely nonsense (e.g., it claims that two is an irrational number and that 10 plus 10 is 10). Of course, detecting language structure is easy while skimming, but evaluating the underlying logic takes active thought.
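The skim-versus-read gap Weinberg describes is easy to reproduce mechanically. In the illustrative Python sketch below (the cue list and the nonsense “proof” are invented for this example, not taken from the course), a naive scorer that pattern-matches on proof vocabulary rates the text highly, while actually evaluating its arithmetic claims exposes the nonsense immediately:

    from fractions import Fraction

    # Skimming for structure: count the connective phrases that make text
    # *look* like a proof. This is roughly all a quick read-through checks.
    STRUCTURE_CUES = ["therefore", "to see this, observe", "pigeonhole"]

    def skim_score(text):
        lowered = text.lower()
        return sum(lowered.count(cue) for cue in STRUCTURE_CUES)

    nonsense_proof = ("Therefore, by the pigeonhole principle, 10 plus 10 is 10. "
                      "To see this, observe that two is an irrational number. "
                      "Therefore the claim holds.")

    print("skim score:", skim_score(nonsense_proof))  # high: it reads like a proof

    # Reading for logic: actually evaluate the embedded claims.
    print("10 + 10 == 10:", 10 + 10 == 10)        # False
    print("2 is rational:", Fraction(2, 1) == 2)  # True: 2 = 2/1, so not irrational

The second kind of check is the one that takes active thought, and it is exactly what pattern-matching, whether by a skimming reader or by the model itself, does not do.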

Fortunately, we couldn’t find a way for ChatGPT to undermine the pedagogy of the course (I again initially panicked when we first queried ChatGPT after its “improved math capabilities,” but fortunately the answers are still nonsense). So, we ultimately decided on a ChatGPT-agnostic policy. We put effort into explaining to students that ChatGPT-generated solutions would be frustrating for graders to evaluate and would ultimately receive scores lower than a blank answer (see here and here — we also tried to share thoughts on other potential uses). Of course, large language models may get better at logic in the future, and we’ll have to adapt.

On a positive note, dissecting ChatGPT-generated solutions helps us teach the valuable skill of distinguishing between logically sound text and text that initially seems convincing but is ultimately BS — we’ve lightly incorporated this into the curriculum.

Matt Weinberg is an assistant professor in computer science. He can be reached by email at smweinberg@princeton.edu.