University President Christopher Eisgruber ’83 and Governor Phil Murphy announced plans to establish a hub for artificial intelligence (AI) activity in New Jersey, in collaboration with the New Jersey Economic Development Authority (NJEDA), at an event in East Pyne on Monday, Dec. 18.
According to the press release from Murphy’s office, the initiative will advance research and development, house a dedicated accelerator space, advance the use of ethical AI for positive societal impact, and promote workforce development to support new technologies.
Among the topics discussed was Nokia Bell Labs’s decision to move to a location closer to Princeton. While Bell Labs no longer has the research centrality it had in the mid-20th century, it is a historic home of innovation and the place where the transistor was invented. The initiative fits into Eisgruber’s effort to build Princeton into a technical hub, which he highlighted in an interview with the ‘Prince’ last November.
The Daily Princetonian spoke with Eisgruber and Murphy before the announcement of the hub’s establishment.
This interview has been lightly edited for clarity and concision.
Daily Princetonian: What do you see as the goals of the center? Would you describe it as trying to build large artificial intelligence systems, or as working on areas supplementary to those larger systems, such as ethical considerations and economic implementation?
Christopher Eisgruber: A strength of Princeton University is that we have world leaders in the creation of artificial intelligence systems who are going to enable us to push into the future — and we do that within the context of a broader university that cares deeply about the liberal arts and the human condition. I think this is shared between Governor Murphy and me, that we need to be able to do this research and lead this initiative in a way that thinks about how AI can make a difference, for the better, and at the same time, integrates us into a broader regional ecosystem.
Phil Murphy: I could not agree more. Remember the movie ‘Everything Everywhere All At Once’? Everything is on the table as it relates to generative art and artificial intelligence. Everything from data centers all the way to ethics and regulation; talent is littered throughout that.
DP: I want to ask you about the possibility of artificial intelligence. Any researcher working on AI will tell you that they don’t really know what they’re building. They’re writing this code, they know what they’re working towards [in the short-term], but they don’t know what’s going to happen in the end. In the development of this technology, how can we be sure that, as our motto says, what we’re doing is really in the service of humanity? How do we know that what we’re doing isn’t going to come back to bite us?
CE: That’s a question we address with regard to any research that we do, right? The extraordinary thing about research and scholarship at the frontiers of knowledge is that it’s exciting because you don’t know where it’s going. That’s true about AI. It’s true about things that we’re doing in bioengineering. It’s true about things that we’re doing in quantum science. It’s why it matters to have universities that hire people who have an interest in the future, in the development of technology, and in doing that in a way that makes a difference for humanity for the better.
That’s what distinguishes our faculty. It’s why it’s better to do it in a partnership between our government, which also cares about those kinds of issues, and a university, rather than having this research be done only in places that are maximizing profit. Governor Murphy and I share a different set of aims. We want prosperity for the state of New Jersey, we want to push the frontiers of knowledge, but we want to make a difference for the better.
PM: I would say Amen to all of that. To suggest that guardrails would not exist in the absence of this hub would be a mistake, but if we advance our endeavors in that respect with this hub, I think we will have a much better shot at getting that balance right. Again, in a partnership like this, our objectives and our motives are not profit; it’s for Princeton and for the benefit of humanity. And for the 9.3 million residents of New Jersey that this impacts positively — everybody, not just a select few.
DP: Thank you. I want to ask you about those residents of New Jersey. A recent Goldman Sachs report, published in April, said that generative AI could increase GDP by seven percent but at the same time displace around 300 million jobs worldwide. This is a common concern about new technologies, as I’m sure you’re aware. How do we ensure with this hub that what we’re working towards is an AI that doesn’t simply boost the wealth of the wealthiest but really works towards creating jobs?
PM: I’ll kick this one off. This is something that we’re keenly focused on. We advance our chances of dealing with that balance properly with this hub, as opposed to in the absence of it. We are keenly aware of this. And you’ll hear in my remarks, I would suspect the same with [Eisgruber], we’re going to focus as well — not just on the potential displacement in the workforce — but also on the whole list of the betterment of humanity and our residents in the state.
I’m going to cite an example which doesn’t necessarily help us in Jersey, but to give you a sense of something: [an AI robot chemist] figured out how to oxygenate Mars. They did it in two and a half weeks, something researchers estimated would have taken 2,000 person-years in its absence. We’re going into the great unknown in many respects. Putting a stark relief around the hub, I believe, advances our chances of getting all these balances right.
CE: I agree 100 percent on this, which is a reason why I’m so enthusiastic about this partnership. AI is going to develop, it will disrupt our world, it will make exciting things possible. We need to be able to do it the right way and we’re much more likely to do it the right way with partnerships like the one we’re talking about here. But you’re absolutely right to raise those policy questions. That’s going to have to be a concern of the state and hopefully we can help with the kind of policy analysis that’s done at the Center for Information Technology Policy (CITP).
DP: Have you thought about specific job retraining programs related to AI and retraining workers on how to use AI at their jobs?
PM: I think we are the only state in the nation right now that has said we’re committed to training every one of our state employees in generative AI. It’s [in an] early stage; our chief information officer is with us today. We have around 61,000 employees, and we’re committed to training each and every one of them. We’re using some nonprofit grants. I know Google, thanks to Eric Schmidt, who has big Princeton roots, and others, is at least one of the parties that’s helping us fund that. We’re absolutely committed to that.
DP: I want to ask you a specific question. Princeton prides itself on having an undergraduate-focused education. Is there a role for undergraduates in the center?
CE: Absolutely. What we’re talking about here is a hub. The hub is a collection of activities that are going to make a difference and bring together partners, in ways some of which we’ve already begun to develop. I talked about CITP. There are things like our initiative around large language models, which is developing under the leadership of Sanjeev Arora [professor of Computer Science] — and there are exciting things that will come out of this that are either on the planning board right now, or that will develop as a result of this partnership with the state.
In just about everything we do with our faculty, there are opportunities for undergraduate engagement. One of the other passions that the Governor and I share is not only for getting undergraduates engaged in this research, but for making New Jersey an attractive place where our undergraduates will stay after graduation. So we want this to be a place where people get engaged, and then we want this hub to generate opportunities within the state [and] keep our talent here in New Jersey.
DP: Last year, the Biden administration released its blueprint for an AI Bill of Rights, as I’m sure you know. It lays out five core protections: from unsafe or ineffective systems, from discrimination by algorithms, from abusive data practices, the right to know that an automated system is being used, and the ability to opt out of such a system. Do you think that this is a good blueprint? How do we see this blueprint, or at least its principles, being at play in this hub?
PM: The parameters of it are a good blueprint. We’re kicking the tires on this in New Jersey as well. I think that’s a fair framework, at least from which to start. I would expect that this will be debated, fleshed out, and tweaked as part of the activities that go on in the hub.
CE: I agree with that completely. This is the point where I want you talking to our subject matter experts about that, including Provost Jen Rexford, but also the people at CITP. I’m glad to see those questions on the table, but what makes me happy is that we have so many people here who are thinking about them in sophisticated ways.
DP: Are there plans to contribute to the development of AI-related policies?
PM: Certainly. We announced, a couple of months ago, a three-legged stool [defining] the broad landscape. One is economic development — a little bit of a riff on [Eisgruber’s] comment about keeping startup companies and jobs in Jersey. Secondly is delivering government services as efficiently as possible using artificial intelligence. That has a lot to do with the training we talked about a minute ago. Thirdly is the whole area of regulation, ethics, etc., and we’re focused on all three of those. Can we add to what’s been laid out elsewhere? The European Union just came up with pretty significant rules of the road. Absolutely. I would hope we would.
CE: I agree with that and as I’ve said, it’s one of the reasons why I think it’s so important for Princeton to be involved. I do want to underscore the tremendous upside as well. You’re perfectly right to focus on things like disruption and risk. But sometimes I think the conversation around AI gets entirely focused on those questions. The upsides are tremendous. There are examples ranging from outer space to human health.
If we’re going to have cures for Alzheimer’s disease, for example, we’re going to have to get AI looking at genetic material and other sorts of data. If we’re going to read some of the scrolls that our humanities departments have been unable to decipher, we’re going to have to let AI loose on those things. If we’re going to understand some of the social problems that are plaguing us right now, the ability to analyze data about what people are doing in society and the world is going to make a huge difference. So yes, we’re going to be focused on the policy questions and the disruptions and issues about guardrails, but we also want to move things forward in those exciting ways.
DP: The press release mentions collaboration with industry leaders and startup companies. What do you imagine those companies and those types of partnerships will look like?
PM: I can see any range of potential here. Part of the reason we’re going public on this at this stage — which is earlier than we normally do (I think it’s fair to say for Princeton as well) — is to pull out of the ether folks who are interested in collaborating [with the center]. The ground zero for this whole area is the Bay Area in California, and if you ask the folks out there why they are there, they’re there because of talent — that’s where the talent either was or has come to. One of the upsides for us in planning this center will be a new talent hub. Whether that comes in the form of academia, or corporate or startup, we’re open to that. We’ve already gotten interest coming at us, so I could envision any number of different partnerships.
CE: It’s one of the reasons we’re so excited about talent, as we were discussing earlier. When you asked about undergraduate involvement, talent is what we do, right? We concentrate talent on this campus with undergraduates, graduate students, postdocs, and faculty members, and I think we have an opportunity with the state’s partnership here to really leverage that as we go forward.
DP: Given the global nature of AI development, how does the AI hub plan to collaborate internationally and contribute to the establishment of ethical standards that can be used worldwide? Of course, if we have ethical standards here but they’re not being used in Europe or in China and other parts of the world, it doesn’t really matter quite as much. How do we approach thinking internationally with this hub?
CE: What I would say is the hub will create a nexus of activities. It will supercharge what we are already doing around AI, bring in new partners in the way that the Governor has described, and help to create the kind of accumulation and interaction of talent that makes such a difference to what it is that we are doing. It’s that talent, and particular activities within it, that are going to have a bearing on the international question and all the other questions. So this will provide additional energy and additional visibility to projects that are going on through CITP, or the work that [politics professor] Jake Shapiro is doing to understand how social media networks affect interactions of people. I would expect that instead of asking, ‘does the hub have a position on this?’ what you [will] be doing is seeing greater activity and people looking to New Jersey as one of the centers of activity in this area for all of these kinds of questions.
PM: I think the only Amen to all that I’d add is that you’ve not only got what’s happening in America versus Europe or Asia, you’ve got what’s happening among American states. It’s too early to tell whether Congress will act in the space, but we’ve got a real opportunity to be a thought leader, both internationally and across all American states.
Julian Hartman-Sigall is an assistant News editor for the ‘Prince.’
Please send any corrections to corrections[at]dailyprincetonian.com.
Correction: A previous version of this article incorrectly stated that Eisgruber credited Laura Cummings-Abdo of CITP when discussing the development of the large language models. A previous version of this article additionally identified Sanjeev Arora as a professor in Electrical and Computer Engineering; in fact, he is a professor in the Computer Science department.