
“We need new words”: DeepMind researcher calls for a new language to keep up with AI

Dr. Been Kim's lecture took place at the Friend Center.
Candace Do / The Daily Princetonian

What if the biggest barrier to understanding artificial intelligence isn’t computing power, but vocabulary? Dr. Been Kim, a Senior Staff Research Scientist at Google DeepMind, argued at a talk on Friday that as artificial intelligence (AI) systems grow more powerful, our existing language can no longer keep up with their new capabilities. 

During a lecture titled “Why We Can’t Understand AI Using Our Existing Vocabulary,” Kim combined computer science with linguistics and psychology to argue that the words we use to understand AI are no longer sufficient. The lecture was based on a position paper of the same name that she wrote in collaboration with two other researchers at Google DeepMind, John Hewitt and Robert Geirhos. 


At the Arthur Lewis Auditorium in Robertson Hall, Kim spoke to a crowd of approximately 200 students, faculty, and researchers. 

“We are facing a communication problem,” Kim began. “AI is doing things that seem strange, even magical. But they’re only ‘weird’ because we haven’t expanded our concepts enough to make sense of them.”

Throughout the lecture, Kim drew on her work at Google DeepMind to illustrate what she sees as a growing chasm between machine behavior and human interpretation. One case study featured AlphaZero, the chess-playing AI that baffled even grandmasters with its unconventional strategies, including sacrificing its queen early in exchange for long-term positional gain.

Kim described how she and her collaborators took those “superhuman” moves and distilled them into human-teachable “concepts,” then tested whether elite chess players could learn from them. The result? Improved performance across the chess board.

“Coaches sometimes spend a year trying to teach one new idea to a professional,” Kim noted. “That we could move the needle this much using AI is pretty remarkable.”

To bridge this understanding gap between humans and AI, Kim proposed “neologisms”: new words that capture AI-native behaviors and ideas. She added that a neologism should strike a balance: abstract enough to be useful across situations, yet specific enough to carry precise meaning. 


Kim’s team coined terms to describe abstract qualities in AI behavior, such as “diverse token” to represent how varied a model’s responses are and “machine good token” to represent how confident it is in its answers. Tokens are the small units of data that AI models process, and they serve as the building blocks on which language models are trained. Kim and her collaborators then trained language models to recognize and respond to these new terms using a method called “embedding learning,” which improved the models’ performance on tasks that involved evaluating their own output. 

“Words catch on because they fill a gap,” Kim said. “We need the same kind of creative precision to describe AI behaviors — especially if we want to control them.” 

Even human-to-human language, she pointed out, is riddled with ambiguity.

“We spend our lives turning ‘the weird thing this person does’ into ‘the thing this person does,’” Kim said. “That’s what we need to do with machines.”


Kim’s ideas resonated with Cecelia Ramsey GS, a PhD student in the Department of French and Italian.

“You need a new terminology so you can think about AI’s capabilities. Sometimes a new metaphor helps you think about science in a different way,” Ramsey told the ‘Prince.’

At the heart of Kim’s argument was the idea that understanding and guiding AI depend on better communication. She pointed to her work on “belief graphs,” visual representations of a model’s internal state that can help humans and machines ask each other clearer, more meaningful questions across a range of topics. 

During the Q&A, audience members pushed Kim on how new tokens might be standardized or adjusted for accessibility. She suggested a future where AI vocabularies emerge organically within user communities and scientific disciplines.

One audience member raised the possibility of building an entirely new grammar around such tokens; Kim responded with enthusiasm.

“We teach kids mathematical notation — a completely made-up language — and it works,” she said. “Why not do the same for AI, especially if it helps us shape the world we’re building?”

“There is a need for regulating authorities to establish a standardized nomenclature, and they exist for fields like chemistry and botany,” Professor of French and Italian and Comparative Literature David Bellos told the ‘Prince.’ “You’d have to have a universal association of AI engineers meet and establish terminology subcommittees to do this [for AI].”

Kim concluded her lecture by stressing the importance of both hope and responsibility as AI systems become increasingly ubiquitous. While she is cautious about replacing human intuition with algorithms, she acknowledged that using models like Gemini in place of traditional search engines can be useful.

More broadly, she encouraged students to take ownership of the future being built around them and think critically about the technologies shaping their lives.

“You have so much power — and so little power — all at once,” she said. “But the choices you make today will shape the future we all live in.”

Adam Moussa is a contributing Research writer for the ‘Prince.’

Please send any corrections to corrections[at]dailyprincetonian.com.