It’s always interesting to hear a professor’s policy or opinions on ChatGPT. Some strictly prohibit it, some allow it with proper disclosure, and some condemn its inability to be intelligent, or even accurate. I usually don’t pay much attention to these warnings, as the people around me and I rarely use ChatGPT in lieu of making our own effort on schoolwork.
But research shows that Americans’ use of generative AI software like ChatGPT is increasing, and college-educated people use it at higher rates than any other educational demographic. One-third of college-aged Americans use ChatGPT regularly for work, entertainment, and everyday search engine functions.
Don’t get me wrong: it’s fun to talk to the AI chatbot and craft the occasional hilariously absurd image. But regular overuse of ChatGPT is bad for the environment, bad for our brains and educational enrichment, and generally unnecessary. Furthermore, as students who are at Princeton to learn, it is a shame to intentionally subvert the skills and abilities Princeton seeks to instill in us. Therefore, we should all make an effort to limit our use of AI chatbots like ChatGPT.
Many uses of ChatGPT are worse versions of things we’ve always done. Asking the AI language model for relationship advice certainly cannot be better than going to a good group of friends. Silly questions like “Do fish have feelings?” can just as easily be answered with Google. And summarizing readings you forgot to do for class was happening long before we had generative AI; AI is killing the age-old tradition of asking a friend for a 30-second summary on the walk to class.
I’ve seen people use ChatGPT for each of these purposes, and I’m sure there are countless examples of members of our community unnecessarily outsourcing their questions and problems to the model.
These kinds of socially negative uses of generative AI have also permeated Princeton specifically. One recently developed TigerApp, Tay, is an AI assistant that claims to know all “academic, eating club, and event information in real time.” But Tay is also advertised as providing insight into course recommendations and Princeton-specific practices like bicker — all things that you would be better off asking a friend in the know. One of the beautiful things about college is access to a huge social community right outside your front door. Why outsource that to AI?
This is especially concerning because AI might not be correct. A study last year found that 52 percent of the answers ChatGPT provided were false. Although newer models have reached 88 percent accuracy, a 12 percent failure rate is not nothing, especially for questions that are an easy Google search away.
Not only does the pursuit of instant answers often direct us to incorrect information, it is also bad for our minds. While there are undoubtedly some situations in which the use of AI chatbots is positive, research from the National Institutes of Health has shown that over-reliance on these tools is becoming prevalent. And when young people are quick to turn to chatbots to think for them, they stop learning to think for themselves.
When chatbot tools like ChatGPT are consistently used in place of basic critical thinking and simple mental tasks, we lose our ability to complete those tasks ourselves. Even Googling takes more effort than ChatGPT: sifting through information and links for yourself, rather than relying on the bot to curate them for you, is a good mental exercise that likely produces better results. Reliance on these tools damages our critical thinking, an especially dangerous side effect in a political climate filled with misleading or untrue information.
And not only is AI bad for our brains, it’s bad for our planet. The bulk of its environmental impact comes from the data centers that power the tool, both during training and in everyday use by the public.
Training GPT-3 used 700,000 liters of fresh water: the daily usage of more than 2,000 people. Individual interactions with the chatbot also consume significant amounts of water. Each query posed to the model uses 10 times as much water as a simple Google search would.
And then there’s the power: NPR reports that “one query to ChatGPT uses approximately as much electricity as could light one light bulb for about 20 minutes.” Multiplied across the software’s daily uses, that adds up.
To sum it up, generative AI is bad for the environment, bad for our brains, and often incorrect. These facts should be enough of a reason for anyone to try to limit their use of the tool, but they should especially compel Princeton students to do better.
We are privileged to receive an incredible education from an amazing institution; we know how to use Google. This is not to say that it’s wrong to ever use ChatGPT to create the occasional funny image or to ask a last-minute question that Google can’t answer. But using ChatGPT all the time makes you look stupid, because it is stupid. We’re smarter than that.
Ava Johnson is a sophomore columnist and Politics major from Washington, D.C. Her column “The New Nassau” runs every three weeks on Thursdays. She can be reached by email at aj9432[at]princeton.edu.