Over one year after the University sent guidelines encouraging faculty to outline rules regarding ChatGPT usage in their syllabi, academic departments continue to lack department-wide policies on the use of generative artificial intelligence (AI). The Daily Princetonian reached out to the head of every department that offers an undergraduate major and found that only one has a department-specific policy: Economics, which has generative AI policies for independent work within the department, according to Department of Economics Chair Mark Aguiar.
In an interview with The Daily Princetonian, Director of the McGraw Center for Teaching and Learning Kate Stanton said that the University sends a teaching memo to faculty each semester “that offers them guidance and good thinking about the semester ahead.”
“This year, that memo included this recommendation about including a generative AI policy as part of the general guidance for preparing for the teaching semester,” she said.
Stanton explained that the University chose not to create its own policy in order to preserve the deference it normally gives departments in setting their own rules and procedures.
“The university didn’t think of [ChatGPT] as falling outside, or somehow exceeding the normal practices of course policies, or of faculty discretion to set those policies,” she added.
After speaking with professors across a range of departments, the ‘Prince’ noticed a difference in the approach to generative AI between the humanities and STEM departments.
The Department of Economics' guidelines require students to inform their advisors in writing if they use generative AI and to keep and share all records of that use; the guidelines also hold students responsible for incorrect information produced by AI.
In the Department of Anthropology, Professor Glenn Shepard said in an interview with the ‘Prince’ that his department doesn’t have a particular policy. “We were given a set of instructions and guidelines from the Committee on Academic Integrity, and we went to a brief seminar about it,” he said.
“They leave it up to the professors to … come up with common sense guidelines which should be in the syllabus,” he added.
Shepard explained that he adopted the same policy as fellow Professor of Anthropology Agustín Fuentes, who requires students to provide a specific reason for using ChatGPT.
“[His] syllabus policy, which reflects University policy,” Shepard said, referring to University guidance, “is [that] you can’t use it to replace your own writing.”
“The suggestion that we were given in this big seminar is: Try to make your exams in a context where it won’t be a problem, because when you do take-home exams, people are going to use it,” he added. Shepard clarified that this means going “back to written exams on paper, just to avoid having to deal with the problem.”
“We discussed this issue last year and decided that we don’t want our students to engage in the use of generative AI before we have a better understanding of this new technology,” acting chair of the Department of Anthropology Serguei Oushakine told the ‘Prince.’ “Right now, most faculty add to the syllabi something along these lines: ‘You may not engage in unauthorized collaboration or make use of ChatGPT or other AI composition software.’”
Engineering departments, however, take a much more liberal approach to generative AI. Visiting Operations Research and Financial Engineering (ORFE) lecturer Ioannis Akrotirianakis told the ‘Prince’ in an interview that he included a line in the syllabus of his course “where we inform the students that they can use ChatGPT and any other generative AI tool. However, for this course, we advise them not to use it.”
“It may also give wrong answers, right? So they have to be careful, especially for mathematically-based courses like the one that I’m teaching,” he explained.
Akrotirianakis added that he is not aware of any ORFE-specific policies that differ from the general University guidelines.
“Currently we do not have such a policy. Of course we are well aware of it, and design our homeworks accordingly,” ORFE Department Chair Mete Soner wrote in a statement to the ‘Prince.’ “Additionally, we plan to discuss the issue in one of the upcoming faculty meetings.”
Some departments allow students to use ChatGPT and generative AI for some assignments, but not for others. In Computer Science (COS) 333: Advanced Programming Techniques, Professor Bob Dondero allows students to use ChatGPT for their semester-long project but not for shorter assignments.
“The purpose of the assignments is to help the students learn how to compose computer programs,” he wrote in an email to the ‘Prince’ explaining his rationale. “A student who uses ChatGPT to compose those programs will not learn as much about composing programs.”
Dondero explained that he allows students to use ChatGPT on their projects because they are intended to simulate real-world programming.
“In the real world, software engineers sometimes use ChatGPT. So I allow students to use ChatGPT on their projects. However, if students do use ChatGPT to compose some software modules, then they must tell me that they've done so,” he wrote.
While many professors have embraced ChatGPT and generative AI tools, others, like Akrotirianakis, remain skeptical, and encourage students to make their own judgments.
“My opinion is that if you want to use it, use it at your own risk,” he said.
Achilleas Koukas is a News contributor for the ‘Prince.’
Olivia Sanchez is an associate News editor for the ‘Prince.’ She is from New Jersey and often covers the graduate school and academic departments.
Please send any corrections to corrections[at]dailyprincetonian.com.