U. computing center improves research capability with minimal environmental cost

The University’s High-Performance Computing Research Center, a 47,000-square-foot facility opened on the Forrestal campus in 2011, has vastly increased the scale on which scientists can perform computational research. Capable of analyzing data sets from disciplines as diverse as astrophysics and genetics, the HPCRC’s modern design provides impressive computing power at a small environmental cost.

The HPCRC succeeds older, smaller computer server clusters located throughout campus, including a former hub at 87 Prospect Avenue. In addition to housing administrative infrastructure, which runs the University’s Internet network, the facility allocates about 70 percent of its capacity for use by the University community. Specifically, researchers conducting intensive computational studies can request access to the HPCRC’s computational power.

At peak performance, the center’s several clusters can collectively produce over 400 teraflops of computing power, roughly the combined output of 400,000 personal computers.

To reach these speeds, the facility incorporates a wide variety of cutting-edge technologies. In most modern computers, central processing units perform the vast majority of calculations. But in recent years, the graphics processing unit has become the hardware of choice for processing enormous sets of data.

Over the next few weeks, 200 GPUs manufactured by global technology company Nvidia will be installed in a new cluster within the HPCRC known as Tiger.

The combined power of innovations such as Tiger has given researchers the tools to combat problems of previously impossible scale. For instance, the HPCRC’s computers have processed large sets of gene expression data for computational biologists and modeled complex seismic scenarios for geophysicists.

Geosciences professor Jeroen Tromp, who directs the Princeton Institute for Computational Science and Engineering, said he believes demand for high-performance computing will continue to increase.

“Computational science and engineering is happening everywhere. More and more areas are recognizing that computer simulation is an integral part of modern science,” he explained.

The power and accessibility of the new data center have dramatically increased the pace of data analysis for researchers such as astrophysics professor Anatoly Spitkovsky. Spitkovsky studies the interactions between subatomic particles and shockwaves produced by exploding stars.

Mapping the acceleration of these particles requires an enormous number of calculations, a task far beyond the scope of even the fastest desktop computers. But over the past 12 months, Spitkovsky’s team has been modeling these systems — 100 billion particles at a time — in a matter of hours, without ever having to leave the office. Facilities at the HPCRC have also allowed Spitkovsky’s team to increase the size of the systems they study tenfold.

“The HPCRC has really allowed us to do larger simulations on larger scales. New machines are coming online and enabling a new kind of science,” Spitkovsky said. “If you have an idea today, you can go and simulate today, and you will get the answer today.”

In order for a research group to begin harnessing the HPCRC’s resources, users must first register an account on the PICSciE website. To request access to a large portion of any cluster, PICSciE requires groups to submit brief research proposals for approval.

While the HPCRC may outperform its counterparts at other academic institutions, Princeton’s computational power is dwarfed by government-owned supercomputers. Titan, the Tennessee-based supercomputer recently crowned the fastest in the world, reports speeds 10 to 100 times those of the HPCRC.

But Tromp believes the University’s facility is perfectly suited as an alternative to often-overbooked government supercomputers.

“In-house, we have resources that are comparable at the 10 percent level to what you might find [at a national supercomputer],” he said. “That enables our faculty, students and postdocs to develop software that ultimately might become mature enough to take advantage of these national resources.”

But the cutting-edge technology housed in the HPCRC is not limited to the computational realm. Accompanying the heat-producing computers and memory banks are state-of-the-art cooling and electricity-generation systems.

Director of Research Computing Curtis Hillegas identified sustainability and reliability as the cornerstones of the HPCRC’s design.

“As we designed the HPCRC and considered the needs of the University, we focused very strongly on two themes: sustainability, to minimize the impact on the environment and to keep the ongoing costs to the University at a minimum, and redundancy to assure that the administrative applications such as email and Blackboard and SCORE are resilient to disruptions inside or outside the data center,” he said.

The bottom floor of the building contains large batteries that provide a backup source of energy in case of a power failure, and chilled water at 45 degrees Fahrenheit flows through the computing area day and night to dissipate heat.

Some of that heat can even be put to work: a gas-powered generator uses the leftover hot air to produce energy and chill coolant water. When energy prices rise, as they often do during the summer months, the facility switches to this alternative power supply. When temperatures drop, outside air is drawn in to cool the computing center.

Combined, these innovations helped secure the facility a gold certification from the U.S. Green Building Council’s Leadership in Energy and Environmental Design program in 2012. The HPCRC is one of only nine such data centers in the world to receive this designation.