In Summer 2023, Princeton’s first official statement on the use of generative AI was a memo penned by former Dean of the College Jill Dolan, Senior Associate Dean of the College Katherine Stanton, and Associate Dean for Academic Advising Cecily Swanson. Beyond clarifying policies on citation and the prohibition of AI tutoring services, the memo gave faculty the responsibility to develop their own AI policies.
With Princeton’s decentralized curriculum, faculty are given complete independence in what they choose to teach, which is fitting for an institution committed to freedom of expression. But when deciding how AI can be used, our faculty must accept its undeniable relevance and prioritize preparing students to enter a changing workforce and a complicated ethical climate. Professors should include on their syllabi concrete language about the responsible use of generative AI and its implications.
As a member of the Integrated Science Curriculum (ISC) — a small first-year course combining physics, chemistry, molecular biology, and computer science — I have been encouraged to use generative AI in working through weekly collaborative “C-Sets.” As a first-time coder, I have found this encouragement particularly fruitful in learning Python. Intentionally prompting generative AI to connect isolated pieces of knowledge has been constructive in the education of each ISC student.
In conversation with peers, I have learned that my experience in the ISC is far from the norm for most Princeton students. Particularly in the humanities, many faculty expect students not to engage with generative AI technology at all. For example, many first-year writing seminars completely restrict the use of AI. I interviewed two of my instructors to better understand faculty perspectives.
Ben Zhang GS ’22 of the ISC recognizes the need for limitations on generative AI in lecture classes, his biggest concern being that “ChatGPT is not the answer to doctrine.” Students often view their AI usage as a “retrieval problem,” attempting to engineer their prompts instead of understanding the material.
Associate professor Flora Champy of the Department of French and Italian worries that allowing AI in academic institutions will train students “who are not able to question the opinions that are thrown on them.” Having spent most of her early life without a phone or access to a computer, she believes “the [students] who use technology the most efficiently are the ones who know how to live without it.” However, she admits that she is not yet familiar with how to work with generative AI and has not had the opportunity to experiment with its potential uses in the classroom.
The fear of losing self-sufficiency as reliance on technology grows is valid, especially coming from scholars who have largely known a world devoid of digital assistance. However, faculty cannot shield their classrooms from technological progress. The proper solution to a confusing and powerful technology is rarely complete prohibition. As students across all majors look forward to futures filled with research, computation, and innovation, it would be ignorant not to address generative AI’s burgeoning relevance. As of October 2024 — just under two years after the release of ChatGPT — one-third of employees report that their employer has taken action on the use of AI. Such a dramatic shift suggests the urgency with which we must integrate responsible generative AI use into our education. Employers expect applicants well-versed in AI, so failing to train students to use this tool will leave them unprepared and unrestrained, both practically and ethically.
Zhang argues that “prohibition is possibly worse than laissez-faire,” or allowing students to use generative AI with no limitations. In courses where professors ban AI tools, it is only realistic to expect that “people are going to use [AI] frantically, without guidance,” playing into the hands of the tech developers who “have a vision of the world where there’s ‘one brain.’”
The complete prohibition of AI in certain classes teaches students to use AI without detection, not how to succeed without it. Instead of showing students how to use it as a supplement to scholarship, such as for finding sources for further research, many courses have judged it an indubitable threat to academic integrity. I encourage Princeton’s faculty to collectively consider how they may be doing students a disservice by keeping us from realizing that academia and AI can coexist.
The power and potential threat of generative AI are not to be taken lightly. No matter how supportive you are of its use, you likely share the same fears as everyone else: generative AI can breach private information, hallucinate faulty or outright fabricated citations, and reproduce racist biases. Yet these fears will only continue to dominate the AI conversation if we do not first learn how to recognize and address them. Ignorance feeds fear, and if we educate our community to be ignorant of the complexity of AI, we will find ourselves incapable of responding to its abuse.
Princeton as an institution is not ignorant of the ethics at play. The McGraw Center for Teaching and Learning summarizes the complexities of generative AI ethics, from environmental threats to human rights violations, in just under 600 words on its website. The page reads more as an attempt to protect the University’s reputation than as an expression of genuine concern: no solution is offered, only the weak suggestion that faculty engage in these conversations. If Princeton recognizes the global significance of generative AI, why has it not centered ethical AI education in campus values? And how can some faculty expect students to avoid its use altogether?
During our interview, Zhang was most concerned with the lack of personal accountability surrounding AI ethics, reminding students that “as your keystroke is typed, the data centers that hum thousands of miles away are a direct result of pressing that enter.”
As members of the academic community, we all play a role in using resources sustainably and responsibly. Whether or not we are proactive in that role, generative AI’s capabilities will continue to grow. By ignoring them, we only allow the problem to fester.
Ryan Moores, a member of the Class of 2028, comes from Colchester, Conn., and plans to major in neuroscience. He can be contacted at rm3719[at]princeton.edu.