Arvind Narayanan, professor of Computer Science and director of the Center for Information Technology Policy (CITP), started working on artificial intelligence (AI) five years ago with a focus on bias and discrimination. Around the same time, Sayash Kapoor GS left his role as a software engineer at Facebook to start his Ph.D. at Princeton.
Their recent book, “AI Snake Oil,” has sparked a global conversation about the limitations and potential misuse of artificial intelligence. But it’s their approach to interdisciplinary collaboration across computer science, ethics, and policy that’s been hailed as breaking new ground.
Together, Narayanan and Kapoor have conducted open-source tech policy research that has reached other academics, industry researchers, lawyers, and policymakers.
“These [disciplinary] walls that we erect never made a lot of sense to me,” Narayanan said. “Nature doesn’t distinguish between different disciplines. If you want to get to the truth, you have to forget about those disciplinary boundaries.”
Zachary Siegel ’25, a computer science student who has conducted research with Narayanan and Kapoor, said the book gives non-practitioners a clear understanding of how AI works and what it can and cannot do.
“For somebody who is not an AI researcher, it can genuinely be difficult to understand how different AI technologies work in different ways,” Siegel said. “The book does a great job of breaking down the distinctions between different sorts of AI technologies, and taking an evidence-based approach to measuring AI abilities.”
Narayanan and Kapoor’s journey to AI skepticism
Narayanan began as a computer scientist scrutinizing tech companies’ claims, with early work focused on digital privacy in apps and websites and on cryptocurrency. An overarching theme of his research has been the tech industry’s power and the need for social counterweights, a perspective that has informed his view of AI.
“While the tech industry has many critics, not many have computer science expertise to empirically identify when company claims might be hyped or false,” Narayanan said.
“I saw articles about hiring automation products claiming to infer job suitability from 30-second videos of candidates discussing hobbies or qualifications,” he continued. “It seemed too ridiculous to be true — an elaborate random number generator appealing to overwhelmed HR departments.”
“Around five years ago, I started getting interested in the question of, beyond bias, does AI work at all?”
These products inspired Narayanan to give a 2019 talk at MIT, “How to recognize AI snake oil,” in which he discussed how to spot inflated claims companies make about their AI’s potential. His slides went viral, drawing thousands of downloads, and his tweets were viewed millions of times.
After graduating from college, where he focused on AI theory and the societal impact of AI, Kapoor worked as a software engineer at Meta (then Facebook). There, he saw how AI was being used to make consequential decisions, such as detecting non-consensual imagery of people and predicting child sexual abuse material.
At Meta, Kapoor also witnessed the impact on the company of the European Union’s General Data Protection Regulation (GDPR), a data privacy law that went into effect in 2018. “I really saw the impact that a single set of legislation can have on an entire multibillion-dollar company, and that led to me thinking about how I wanted to effect change. And I realized that one of the best ways to do that is from the outside.”
In 2021, Kapoor started at Princeton as a Ph.D. student under Narayanan.
Mentorship, advising, and education on campus
Beyond their research together, Narayanan and Kapoor have made the societal impact of AI a focus of their mentorship and teaching.
In fall 2023, Narayanan taught the first iteration of the course Ethics of Computing, which combines philosophical inquiry with hands-on programming assignments. Narayanan and Kapoor also teach a graduate seminar called Limits to Prediction with sociology professor Matthew Salganik.
“It’s not just about teaching students how to code or how to build AI systems,” Narayanan said. “It’s about teaching them to think critically about the implications of these technologies. We want our students to be able to ask the right questions, to challenge assumptions, and to consider the ethical implications of their work.”
Mihir Kshirsagar, a lecturer with a legal background in the School of Public and International Affairs (SPIA), will assign chapters of the book in his upcoming spring class, Big Data in Society.
“[AI Snake Oil] is used in seminars with computer science and SPIA students because [the authors] speak very clearly, cutting through jargon to identify core issues,” he said. “It’s an effective teaching tool.”
Narayanan and Kapoor also collaborate with the larger undergraduate and graduate student community at Princeton.
Varun Rao GS was a software engineer at Amazon before pursuing a Ph.D. at Princeton. He has worked with Narayanan and Kapoor on a variety of projects, including serving as an assistant instructor for Ethics of Computing in fall 2023. As part of a working group informing the Code of Practice for AI providers under the European Union’s AI Act, Rao studied the impact of AI on job displacement, concluding that while instances of job loss exist, workers are also adapting to AI-driven changes.
Rao said that he most admired the constructive nature of Narayanan and Kapoor’s approach.
“They not only criticize the current state of things, but they provide concrete and feasible suggestions on how to fix it, and I think that’s the harder thing to do,” Rao said. “I used to be in industry, and one of the criticisms of this entire area of fairness and transparency and bias is, ‘well, people just criticize, but don’t really offer solutions and tell us what to do or how to do it better.’”
Siegel said that he enjoyed working on specialized projects probing the abilities of AI agents, such as assessing whether they can reproduce the results of published scientific papers.
“Many researchers often can end up working on their own individual projects, but Arvind and Sayash both collaborate across universities and across fields,” Siegel said. “We organized a workshop on AI agents, and we invited many guest speakers from across academia and industry.”
Unmasking AI Snake Oil: Beyond the Orange Bubble
Narayanan and Kapoor have shaped policy discussions around AI, offering a nuanced view of its capabilities and limitations.
Sujay Swain ’25, an Electrical and Computer Engineering student, worked at the Federal Trade Commission last summer, funded by the CITP Siegel Public Interest Technology fellowship. He mentioned that Narayanan and Kapoor’s Substack blog on AI, which preceded the book, was widely referenced during his internship.
“It was a resource that people from all over were really interested in and used as a reference point,” Swain said. “I think it was really cool to see the broader influence of the work that Princeton is doing on tech policy.”
Rao recalled that, in his experience, industry professionals criticized the AI fairness field for not presenting clear solutions to the problems it raised.
“A lot of Arvind and Sayash’s work and things in the book touch on solutions,” he said. “I found that to be really fascinating.”
Narayanan and Kapoor are also involved in the Princeton AI Dialogues and AI Policy Precepts, offered in Washington in collaboration with the School of Public and International Affairs. The two non-partisan programs promote conversations with federal policymakers about the core concepts, opportunities, and risks underlying the future of technology.
“Arvind and Sayash are really pushing the boundaries of what we consider computer science education,” said Steven Kelts, a CITP lecturer who teaches tech ethics at Princeton. “They’re showing that it’s possible to combine rigorous technical training with a broader understanding of societal impacts.”
Looking to the long-term future of AI, Narayanan said he was optimistic but stressed that it was crucial to understand the technology’s limitations.
“Many AI success stories are around us. Things like autocomplete and spell check were once cutting-edge AI,” he said. “Self-driving cars were overhyped but are now real, with taxis giving rides to millions. Eventually, everyone will have access to them, reducing road accidents, which cause about a million deaths per year worldwide.”
Chloe Lau is a staff Features writer for the ‘Prince.’
Please direct any corrections to correction@dailyprincetonian.com