This article has been updated from a previous version to correct the processing power for the current leader of the Top500 list.
Stanford’s Folding@home project was one of the first distributed life science computing projects when it kicked off in 2000 with the goal of joining hundreds of thousands of desktop PCs to simulate the folding behavior of proteins.
Six years later, the effort is leading the way once again in an emerging trend to harness the power of graphics hardware for high-performance computing.
This week, the initiative, led by Stanford’s Vijay Pande, released a beta version of the Folding@home client that runs on graphics processing units from ATI Technologies. The release follows Sony’s announcement in August that it had developed a prototype of a Folding@home client that will run on the upcoming PlayStation 3, which is based on the new Cell processor.
In a presentation to announce the launch of the ATI Folding@home client, Pande said that the software running on ATI’s Radeon X1900 graphics card “can achieve almost 100 gigaflops per processor.” At that rate, he noted, 10 users running the client on GPUs could achieve a teraflop of processing power and 10,000 could reach a petaflop. By comparison, the highest-ranking computer on the most recent Top500 list delivers 280.6 teraflops.
“If 5 percent of the people currently involved in Folding@home are on GPUs, we would have a petaflop machine,” he said.
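The arithmetic behind Pande’s figures can be sanity-checked with a quick back-of-the-envelope calculation. All numbers below come from the article itself (the per-GPU rate is his “almost 100 gigaflops” estimate, and the 200,000-desktop figure appears later in the piece); this is an illustrative check, not a measurement.

```python
# Illustrative sanity check of the flops figures quoted in the article.
gpu_gflops = 100                 # "almost 100 gigaflops" per Radeon X1900, per Pande
gpu_flops = gpu_gflops * 1e9     # convert gigaflops to flops

teraflop_users = 1e12 / gpu_flops   # users needed to reach one teraflop
petaflop_users = 1e15 / gpu_flops   # users needed to reach one petaflop

active_desktops = 200_000        # Folding@home desktops cited by Pande

print(teraflop_users)                      # 10.0
print(petaflop_users)                      # 10000.0
print(petaflop_users / active_desktops)    # 0.05, i.e. the 5 percent he mentions
```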
GPUs, designed to calculate and render millions of pixels hundreds of times per second, have seen enormous performance improvements over the last few years, driven primarily by the gaming market. But some in the IT community have seen an opportunity in using this processing power for more than playing Quake and Doom, and there has been a recent upsurge in the so-called GPGPU (general-purpose computing on graphics processing units) field.
Additional benefits of using GPUs for high-performance computing include the low cost of graphics cards — generally under $500 per card — and the fact that they are already installed in many desktop machines.
Now GPU vendors like ATI are jumping on board. Last week, the company announced its “Stream Computing” initiative — an effort to enable GPUs to work together with CPUs to solve complex computational problems.
In a presentation announcing the initiative on Sept. 29, ATI CEO Dave Orton cited bioinformatics projects like Folding@home
as one of several promising application areas for the approach. “ATI is moving from games to genes,” he said.
Folding@home is “a great proof point” for stream computing, Raja Koduri, senior architect in the hardware design group at ATI, told BioInform.
Koduri noted that because GPUs are built for graphics, “the programming model is different” than that of CPUs. Those who are interested in adopting GPUs for high-performance computing may require a bit of help in porting their code to the new platform, he said, and ATI is working with several partners to create an “ecosystem and platform” to enable that transition.
Indeed, the difficulty in programming GPUs may be the primary barrier to more widespread adoption in the HPC community to date.
Chris Dwan, a consultant with the BioTeam who tracks trends in high-performance computing, noted that while the benefits of GPUs seem obvious, adoption for HPC is still a relative rarity.
“If you could do something with these graphics boards that are in every machine that’s ever sold, you could definitely get ridiculous amounts of computation,” he said. “But my impression is that for the past several years, it’s always been just out of reach. It’s been a good idea that nobody’s been able to get working in a robust, reliable way. I’m not sure why that is, exactly, except the underlying math is hard.”
Wayne Huang, who worked on a project to implement the Smith-Waterman algorithm for nVidia graphics cards at Lawrence Livermore National Laboratory, told BioInform in December that GPUs can be very difficult to program.
“You really have to be able to formulate your problem in a way so that it can be accelerated on the graphics card, and that in itself is probably the most difficult part of the solution,” he said at the time [BioInform 12-05-05].
Huang has since moved to Lawrence Berkeley National Laboratory, and his colleague at LLNL, Yang Liu, is still pursuing the project as “a hobby.” Liu told BioInform this week that while he hasn’t made much progress on the initial Smith-Waterman implementation, “we’re planning to evaluate what we have on the current GPU architectures and see how much of a performance gain we can get just based on the progress of the hardware.”
In addition, he said, “We’re also planning to look at a Cell implementation of Smith-Waterman,” as well as Blast and genome-assembly algorithms for the Cell. “GPUs are still kind of restrictive,” he said. “As programmable as they are … the architecture is still kind of restrictive for doing some of this parallel processing.”
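Huang’s point about having to reformulate the problem is easier to see from the Smith-Waterman recurrence itself. Below is a minimal, CPU-side Python sketch of the algorithm (the scoring parameters are illustrative and not taken from the LLNL implementation): each cell of the score matrix depends on its left, upper, and upper-left neighbors, so only cells along an anti-diagonal can be computed independently, and it is that reordering that a GPU port must exploit.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Local alignment score via the Smith-Waterman dynamic program.

    Illustrative scoring scheme; real implementations use substitution
    matrices and affine gap penalties.
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,  # diagonal: match/mismatch
                          H[i - 1][j] + gap,    # gap in sequence b
                          H[i][j - 1] + gap)    # gap in sequence a
            best = max(best, H[i][j])
    return best
```

Because H[i][j] reads three previously computed neighbors, the natural row-by-row loop above is inherently serial; a data-parallel version has to sweep the matrix one anti-diagonal at a time, which is the kind of restructuring Huang described as the hardest part of the port.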
The Cell — short for Cell Broadband Engine — processor, developed by Sony, Toshiba, and IBM, combines a general-purpose processor with multiple GPU-like coprocessors. In addition to its use in Sony's upcoming PlayStation 3, IBM is using the processor in its BladeCenter QS20 server.
Liu said that the Cell is easier to program than a GPU and that its heterogeneous architecture is more suitable to many bioinformatics problems, “but GPUs have the advantage right now in that they’re commodity and really cheap.”
ATI’s Koduri said that while the company hasn’t tested the performance of its GPUs against the Cell, “the data we [saw] from Stanford is that the GPU rates are two to four times faster than the Cell, and this is with a $200 to $250 card today. And if you see the progression of how the prices go — we have new cards coming, which means that the existing cards will drop in price — so you will see cards at $100 and below in 2007 producing a phenomenal amount of work units.”
GPUs also offer advantages over programmable hardware like FPGAs, Koduri noted, because they can offer comparable speedup for certain applications at much lower cost.
However, Martin Gollery, associate director at the center for bioinformatics at the University of Nevada at Reno and an expert on FPGA-based bioinformatics systems, noted that the cost savings only comes into play for computers that are shipped with high-end graphics cards. If you don’t have one, “then you may spend as much for a graphics card as you would for a low-end FPGA card, with a much smaller speedup,” he said.
Dwan noted that for users who have GPUs installed, the benefits may still outweigh any potential drawbacks. “Even if it’s slower than it would have been on the main CPU, so what? You’ve got one of these things free in every desktop — a little graphics board just built into them. So if they can get any utility at all out of them, it’s a win.”
As for Folding@home, Pande is confident that the benefits will be obvious. “Over the last five to six years, we have made significant advances,” he said. Yet even with the significant computational power of 200,000 desktops, “we’ve reached certain limits, and I think GPUs are what’s going to be able to push us several orders of magnitude in the future from where we are now. That’s something I’m very excited about.”