Green HPC Has Arrived

Whether it’s a giant supercomputer churning through calculations at teraflops speeds or the average Beowulf cluster running a sequence database search, the same basic issues of power consumption and heat apply. The more powerful the compute system, the more watts your local power company has to generate to keep it up and running.

It’s not just powering the cluster, either: all of those processors generate a substantial amount of heat, which has to be removed by elaborate cooling systems to prevent malfunction. Some estimates hold that for every watt spent on computation, another watt is needed for cooling. Whatever the exact ratio, all of this power consumption takes a toll on both the environment and an institution’s coffers. According to the Environmental Protection Agency, data centers in the United States consumed almost 1.5 percent of the country’s total electricity in 2006. Even if you’re not worried about the environmental impact, that is one hefty electric bill, and a growing number of computing directors are starting to worry about it.

So the need to shrink a compute center’s energy footprint is clear. But because computer systems and installations vary tremendously from site to site, the two biggest challenges in green computing are collecting useful data and settling on a uniform way to measure just how energy-friendly a facility is.

The HPC Laughingstock

High-performance computing has always been about turbo-charged speed and muscular, high-throughput capability. The environmental and financial costs of keeping the beasts running have been regarded as the price paid for performance. So how does one actually implement environmentally friendly, energy-efficient, high-performance computing? Had you asked that question of a room full of computer scientists or IT folks just three or four years ago, you would likely have been met with either blank stares or downright hostility, says Wu-chun Feng, a green computing trailblazer and associate professor of computer science at Virginia Tech.

Many members of the normally forward-thinking HPC community were slow to recognize that large-scale computing is not nearly as energy-efficient as it could be. Perhaps even harder to swallow was the idea that high-throughput computing could be done effectively with a low-power system design. Feng began seriously pursuing low-power options for high-performance computing back in 2002, but the reception from colleagues was something like a modern-day Galileo affair. “I struggled for three to five years at computing conferences talking about low power and power awareness, and it was really only about a year ago that people were accepting it enough that I didn’t get a lot of wisecracks,” says Feng. “I had my head handed to me on a plate many times. People were so flabbergasted that something like that could even exist and be useful.”

Detractors would most often scoff at the idea that any truly low-power computing architecture would ever be powerful enough to meet real data center needs. Kirk Cameron, Feng’s green computing colleague and fellow Virginia Tech computer science faculty member, says that trying to drive the point home was an uphill battle all the way. “Originally people laughed at this stuff, that we would even consider power in HPC,” says Cameron. “But I made the case early on that you had to or else it was going to limit your performance because you couldn’t just keep mounting components in close proximity to gain performance without considering power at all.”

Gaining Acceptance

In an effort to promote awareness of low-power computing, and to change the very definition of performance in HPC, Feng and Cameron launched the Green500 at the International Conference for High-Performance Supercomputing last year. Unlike the Top500, which ranks the world’s top 500 supercomputers solely by FLOPS (floating-point operations per second), the Green500 list uses a different set of metrics to evaluate computer systems. “The Green500 tries to point to the fact that there are other metrics that are of interest to high-performance [computing],” says Feng. “One might argue that the Top500 list is a really good indicator for top speed, but when you or I go buy a car, we don’t necessarily want the top miles per hour. We probably want better reliability, better efficiency, so both of those things are metrics that are more encompassed by the Green500 list.”

The Green500 uses FLOPS per watt as its determining factor. So far, the initial version of the list ranks IBM’s BlueGene/L and MareNostrum supercomputers at numbers one and two, respectively; an official version will be unveiled at the Supercomputing Conference this month. Currently, the list pairs the Linpack benchmark for FLOPS with the group’s own “Total Rated Power” figure, the maximum electricity required to power the machine plus the power associated with cooling it.
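
As a back-of-the-envelope illustration of the Green500 metric, ranking by FLOPS per watt is a one-line computation. The systems and numbers below are invented for illustration, not actual list entries:

```python
# Rank hypothetical systems by energy efficiency (gigaflops per watt).
# Linpack scores and power draws here are made up, not Green500 data.
systems = [
    ("System A", 280_000, 1_400_000),  # (name, Linpack gigaflops, watts)
    ("System B", 62_000, 250_000),
    ("System C", 10_000, 90_000),
]

ranked = sorted(systems, key=lambda s: s[1] / s[2], reverse=True)
for name, gflops, watts in ranked:
    print(f"{name}: {gflops / watts:.3f} gigaflops per watt")
```

Note that a machine near the bottom of the Top500 on raw speed can still land near the top of a ranking like this if its power draw is modest enough.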

Feng recalls how he was regularly booed and hissed off various computer conference stages when presenting his energy-efficient cluster design, called Green Destiny. The initial Green Destiny design, conceived in 2002, was a 240-node unit the size of a phone booth that drew 400 watts under full load. (A standard 240-node Beowulf cluster consumes an average of about 36 kilowatts at capacity.) Feng’s architecture required no special cooling facilities or air conditioning, and could function in a dusty warehouse at 85 to 90 degrees Fahrenheit. The trick to Green Destiny was its highly energy-efficient Transmeta chips, which eliminated roughly 75 percent of the transistors found in traditional RISC chips. But because the processor relied on embedded firmware to make up for the missing transistors, its performance lagged behind Intel and AMD processors. The researchers were nonetheless able to boost performance by more than 40 percent by tweaking the chips’ firmware, clocking Green Destiny at 200 gigaflops on the Linpack benchmark, which would have placed it in the high 300s on the Top500 list at the time, Feng says.

The Green Grid

The computer industry took its first major step toward addressing energy efficiency in high-performance computing systems with the launch of the Green Grid at the start of this year. The Green Grid is an industry-wide consortium of 11 founding member companies, including AMD, IBM, Intel, and Microsoft, and membership is open to any company that wants to join.

“What drove us to this as a group was … the fact that data centers were doubling in size with no end in sight and power densities were getting so high that we as an industry, both end users and vendors, had to address this problem if we were going to stay on the vector of a society that’s enhanced by greater computing throughput as opposed to inhibited by it,” says Larry Vertal, AMD senior strategist and Green Grid board member. The canary in the mineshaft for the industry was the growing number of extremely high-density computing systems in data centers, he says. Across the board, users were on average populating their racks to only about three quarters of capacity because of the power required to run the units and the cooling problems that came with them.

The group’s preliminary research found that most data centers draw three times the energy actually required to power their computing equipment. The group offers suggestions for improving computer system design as well as data center architecture, including floor layout, energy-saving lighting, and proper air conditioning installation.
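
To see what that three-to-one finding means for a budget, here is a quick hypothetical calculation; the IT load, overhead factor, and electricity rate below are assumptions for illustration only:

```python
# Hypothetical: a 500 kW IT load in a facility that draws three times
# the equipment's power once cooling and distribution are included.
it_load_kw = 500
facility_kw = it_load_kw * 3.0          # total draw, per the 3x finding
overhead_kw = facility_kw - it_load_kw  # power not doing computation

rate = 0.10                             # assumed electricity cost, $/kWh
hours_per_year = 24 * 365
print(f"Overhead: {overhead_kw:.0f} kW, "
      f"${overhead_kw * rate * hours_per_year:,.0f} per year")
```

Under those assumed numbers, two thirds of the facility’s power bill, nearly a million dollars a year, goes to overhead rather than computation, which is exactly the waste the consortium’s design suggestions target.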

The Pioneers

Not all vendors have been slow to catch on. In early September, HPC vendor SiCortex demonstrated that its SC648 cluster computer could be powered by a professional racing team pedaling eight bicycles hooked to generators. While many vendors have adopted a green-oriented approach in some of their products and marketing, SiCortex is one of the few actually offering low-power HPC systems. The bicycle-powered SC648 successfully analyzed the genomes of hundreds of insect pests for the US Department of Agriculture, looking for related species that may previously have gone undocumented. In addition to relatively low-wattage chips, which keep heat to a minimum, the units also respect the elementary physics of rising hot air: where many data centers are cooled from front to back, SiCortex designed its system with vertical airflow, which keeps cooling costs down because most of the work is done by natural air movement.

And in 2004, a start-up company called Orion Multisystems marketed a version of Feng’s low-power Green Destiny cluster design. The company pushed a workstation model, the DT-12, that came equipped with 12 processors while using only 150 watts; at the time, it had a performance/power ratio roughly three times that of traditionally designed server boxes. And even though Orion Multisystems went belly-up in 2006, the fact that two of the original Green Destiny clusters are still in use by Los Alamos National Laboratory researchers may be the best testament to their value. Jason Gans, a researcher with the biosciences division at LANL, uses the energy-efficient cluster to design nucleic acid assays for biothreat and human health agents. Before Green Destiny, Gans used a blade cluster with slightly higher performance, but cooling the high-density system was a serious problem that resulted in frequent failures. The bottom line for Gans is that reliability takes precedence over speed. “While the network is not that fast, the up-time is so much better, and the reliability and the heat dissipation is a huge improvement [over the blade system],” says Gans. “We’re very happy to have these low-heat-producing clusters because it makes my job of maintaining the things a lot easier.”

The Software Side

In 2003, Feng helped develop a software solution that acts as an energy regulator for the processor. EnergyFit is an application-layer technology that adjusts CPU speed and supply voltage on the fly. It can be installed on anything from a laptop to a cluster, as long as the processor supports dynamic voltage and frequency scaling. And it is entirely transparent: a software layer that permits routine operation while cutting the system’s energy consumption by upwards of 70 percent. The LANL website claims that EnergyFit could save a data center hundreds of thousands of dollars in operating costs over the course of its lifetime. According to Feng, EnergyFit has undergone testing in numerous industrial and commercial settings.
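
EnergyFit itself isn’t shown here, but the mechanism it relies on, dynamic voltage and frequency scaling, is exposed on Linux through the standard cpufreq sysfs files. Below is a minimal sketch, assuming a cpufreq driver that offers the “userspace” governor and root privileges: it steps one core down to its lowest advertised frequency for a memory-bound phase, then restores full speed.

```python
# Minimal DVFS sketch: step core 0 down to its lowest advertised
# frequency, run a memory-bound phase, then restore full speed.
# Assumes a Linux cpufreq driver exposing the "userspace" governor;
# the sysfs paths are the standard locations. Must be run as root.

CPU = "/sys/devices/system/cpu/cpu0/cpufreq"

def read(name):
    with open(f"{CPU}/{name}") as f:
        return f.read().split()

def write(name, value):
    with open(f"{CPU}/{name}", "w") as f:
        f.write(str(value))

freqs = sorted(int(k) for k in read("scaling_available_frequencies"))
write("scaling_governor", "userspace")
write("scaling_setspeed", freqs[0])    # lowest frequency, in kHz
# ... run the memory-bound phase of the application here ...
write("scaling_setspeed", freqs[-1])   # back to full speed
```

A regulator like EnergyFit makes this decision automatically and continuously, phase by phase, rather than at hand-picked points in the code.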

Another way to approach the heat problem is to look at the code itself. Phases of an application that are computationally intensive over short periods tend to run the hottest. Conversely, memory-bound code that does little number crunching runs slightly cooler, and portions where the processor is simply waiting for I/O from disk cool off dramatically, Cameron says. Tempest is a software application that lets users determine which portions of code generate the most heat on the processor. “What we’re doing is using the data from Tempest to feed back into a system that allows you to control your resource,” he says. “From a system administrator point, hopefully we could develop tools that would enable the system administrator to regulate temperature in some fashion.”
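
This is not Tempest itself, but the basic idea of attributing heat to code regions can be sketched with nothing more than the temperature sensors Linux exposes in sysfs. In the illustration below, the two “phases” are invented stand-ins for compute-bound and I/O-bound sections of a real application:

```python
import time

def cpu_temp_c():
    # Linux reports zone temperatures in millidegrees Celsius.
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read()) / 1000.0

def profile_phase(label, fn):
    # Record temperature before and after a code phase runs.
    before = cpu_temp_c()
    start = time.time()
    fn()
    print(f"{label}: {time.time() - start:.1f}s, "
          f"temp {before:.1f}C -> {cpu_temp_c():.1f}C")

# Hypothetical phases: a compute-bound loop heats the core,
# while waiting on I/O (simulated by a sleep) lets it cool.
profile_phase("compute-bound", lambda: sum(i * i for i in range(10**8)))
profile_phase("io-wait", lambda: time.sleep(5))
```

Tagging every phase of a long-running code this way yields the kind of per-region heat map that Tempest feeds back into the resource-control system Cameron describes.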

The code is currently available for download under a non-commercial license from Cameron’s lab’s website or from SourceForge. And while it has been said that whoever marries the zeitgeist will soon become a widower, the growing crowd aboard the green high-performance computing bandwagon suggests this is no fleeting fancy.

Additional Resources

http://www.greenercomputing.com/
Covers environmental concerns relating to the IT community, including energy efficiency and equipment disposal issues

http://www.thegreengrid.org/home
The Green Grid’s official site, featuring research papers, downloadable metrics guidelines, and news

http://www.green500.org/Home.html
Home of the Green500 list, plus downloadable conference papers, talks, and tutorials courtesy of Feng and Cameron

http://www.80plus.org/
A forum sponsored by utility companies and the computer industry promoting energy-efficient power supplies in desktops and servers
