During a talk on campus last spring, Eric Schmidt ’76, the executive chairman of Alphabet Inc., glowingly described how technological solutions and big data will soon remedy virtually all societal problems. It is a tantalizing idea — that all the human suffering we see in the world can be eliminated by predictive algorithms, powerful analytics, and global interconnectedness. As a computer science major, I can attest to the allure of Silicon Valley, and Princeton alumni no doubt can too. According to a Career Services report, 15 percent of the Class of 2014 pursued jobs in science and technology, up from 12 percent the year before. While it’s easy for a billionaire entrepreneur to put technology on a pedestal, I am wary of the triumphalist narrative surrounding technology and the assumption that technological progress is inherently good.
There are objective metrics of human progress, such as life expectancy, nutrition, and literacy, and when technology helps them improve, it should be celebrated. But these metrics do not capture more subjective measures of quality of life: mental health, community bonds, happiness, and a sense of purpose. Studies have linked social media use to social isolation and depression. That is not surprising, given the emphasis on culture and community in less technological societies.
Technology is a tool for improving society, not something to be idolized; it is a means, not an end. Unless we think critically about what purpose we are building toward, we may find ourselves inventing things that would be better off not existing. History is rife with examples of technology going wrong, or even being used to advance death and destruction, from the A-bomb to Zyklon B. And there are some things technology may never be able to do: I am skeptical that an app will ever cure depression or heal racial divides. We should approach technology not unquestioningly, but with a balance of hope and realism about its limitations.
Technological progress has left a trail of unresolved societal side effects that demand serious deliberation and real solutions. Technology, while increasing overall wealth, can also concentrate that wealth in the hands of the few who control it. In parts of Silicon Valley, median home prices have risen 18 percent year over year, fueling a homelessness crisis. Analysts predict that tens of millions of low- and middle-skill jobs in the United States will be lost to automation in the coming decades. Caught up in the hypnotic possibilities afforded by a computer science degree, highly educated workers can easily overlook the human impact of the industry that promises to do so much for the world.
Beyond its economic side effects, technology has direct impacts on its users. Over fall break, I had a chance to meet with members of the Federal Trade Commission in Washington, D.C. Their work often involves protecting consumers from sophisticated online scams and from devices that process user data without informed consent; it illustrates both the arms race between enforcers and criminals and the tension between security and privacy that comes with a more advanced and connected world. Research by University professors has shown how machine learning can absorb human biases along the lines of gender and race. Well-meaning tech utopians have proposed replacing human judges and parole officers with algorithms that predict criminal behavior, but those algorithms end up reproducing the racial biases implicit in their training data.
We are also becoming increasingly aware of the influence of technology on politics. Facebook’s treasure trove of user data allows fine-grained ad targeting, which both political parties (not to mention Russia) used in the 2016 election. Facebook and Twitter are technological vectors for the viral spread of misinformation and vitriol, while content recommendation algorithms that know you all too well increasingly control the articles you read and the videos you watch.
It is crucial that we all stay informed about the potential issues in emerging technologies and foster productive conversations about them in our workplaces, in public policy, and in the classroom. Rather than using college simply as a time to hone technical skills, pick up new programming languages, and become as marketable as possible to the Googles and Facebooks of the world, we students in tech should take the time to form value systems, learn how technology interacts with law and governance, and reflect critically on the technology in our lives.
Thomas Clark is a senior studying computer science from Herndon, Va. He can be reached at thclark@princeton.edu.