You've got to love marketing trends in the world of high-performance computing. Unless, of course, you're stuck sifting through the hype to discern the best compute configuration for your lab or institute. These trends become particularly slippery when categories of technology are stretched to the point where their very description seems at best a misnomer or, at worst, nonsensical. Many a brow has furrowed over a recent trend that has picked up considerable momentum: "desk side" or "personal" supercomputing. Manufacturers like Cray, Nvidia, SGI, and HP have all gotten on the bandwagon of marketing HPC solutions geared toward the individual researcher or small laboratory, which raises the question: Just what is "personal supercomputing"?
Andrew Jones, vice president of consulting at the Numerical Algorithms Group, a nonprofit HPC software and services organization, has argued that by definition a supercomputer is a computer with data processing power beyond what is commonly available. "There is always going to be a class of computing power that is much bigger than anything that will physically fit on your desk, because if you can buy something for $1,000 or $10,000, then there are going to be users that are prepared to buy hundreds of them for a million dollars," Jones says. "And there's always going to be something that is orders of magnitude bigger than what most people can afford, but the cheap stuff gets more powerful." And that's the point: good old Moore's Law, along with improvements in hardware like GPUs, makes that cheap and powerful stuff easily accessible for the rest of us.
Supercomputer manufacturer Cray, which since its inception more than 35 years ago has focused solely on massive, high-end supercomputers, unveiled the Cray CX1 personal supercomputer a year ago. Unlike anything Cray created before, the CX1 can fit on or under a desk, draws its power from a standard 110-volt wall socket, requires no additional cooling infrastructure, and can house up to 64 processing cores. The price tag starts at roughly $9,500, but in July Cray released an even cheaper version called the CX1-LC (light configuration). And in an obvious effort to secure a customer base of individual researchers and small labs, both machines come equipped with either the Windows HPC Server 2008 or the Linux-based Rocks+ operating system.
Rico Magsipoc, chief technology officer of the Laboratory of Neuro Imaging (LONI) at the University of California, Los Angeles, School of Medicine, was tempted enough by the prospect of having a mini-supercomputer that he decided to purchase a CX1 unit for the researchers at his lab. Among other projects, LONI conducts cross-species brain function studies employing complex algorithms that result in computationally heavy jobs. Not only do these jobs take several hours to run, but researchers often faced additional wait time to gain access to the lab's larger HPC resources. "We have quite an extensive HPC implementation in our facility here, but the problem is we have a large pool of processors for production, and a large pool of processors for development — but for the individual researcher, we don't really have resources," Magsipoc says. "With CX1 we were able to give a faculty member a machine with 32 CPUs, configured any which way, and he's now a lot more nimble and agile with his research."
Ultimately, eliminating the high cost of failure during the testing and tweaking phase of tool development is where Magsipoc sees the big win with personal supercomputing. Before the lab purchased its CX1, all tools had to be tested on the development cluster, so if something turned out not to work as planned, it was costly in terms of energy usage and the time it took to queue up for access. Now, users can test tools to their heart's desire without even setting foot in the cluster room. "We have a number of researchers here and I can envision each of them having their own CX1 and then when their tools are ready, we can deploy them on our larger cluster," Magsipoc says.
GPU chipmaker Nvidia also has a desk-side supercomputer offering: the Tesla Personal Supercomputer, which supports 64-bit Linux and 64-bit Windows XP and comes loaded with a quad-core AMD or Intel CPU and up to four Nvidia GPU cards. Nvidia encourages users to construct their own units the same way gamers have done for years; still, preconfigured units are available through affiliated resellers, or one can simply go to a local PC shop and have one built.
That's exactly what John Stone, a senior research programmer at the Theoretical and Computational Biophysics Group at the University of Illinois, has done to facilitate the development of molecular visualization software without having to fork over a lot of money or deal with queuing up for time at a cluster. Stone currently uses a preconfigured quad-core Linux PC with three Nvidia GPUs that is capable of more than a teraflop's worth of performance. "We actually bought that from a local PC vendor, had them assemble the machines for us, and we got three of the Nvidia GPUs," Stone says. "It was very easy and this is something a lot of people can do."
Stone's personal supercomputer serves as a very real replacement for a cluster of PCs, and lets his team of researchers avoid some of the operational hassles associated with clusters. Currently, his lab has 20 such GPU-enabled PCs specifically equipped for visualization tool development. Ultimately, he says, the real story behind the personal supercomputing hoopla is that for the first time, commodity devices are massively parallel and deliver the same aggregate floating point performance that just a few years ago would have required a room full of typical PCs or server-class machines.
In the case of Nvidia's products, assuming you feel comfortable in the GPU programming world — which is steadily becoming more attainable — you can get a level of performance that was not practical before from a machine whose form factor fits on or under your desk. "This is very exciting to me because I develop molecular visualization and analysis software and this software is run by a range of researchers, many of whom are not computer experts or hardware guys. They are wet lab scientists, so they are not compute savvy beyond the point of using the software to do what they need to do," Stone says. "For them, the appeal is that they can very easily go to their local computer administration team and say 'I need a new computer that has four GPUs in it and I will be able to run my work 100x faster,' which means they can do things they just couldn't even do before."
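The reason workloads like Stone's map so well to GPUs is that they are data parallel: the same small calculation is repeated independently over millions of points. As a hedged illustration (not Stone's actual code), consider computing an electrostatic potential map, a common step in molecular visualization. Every grid point's value depends only on the fixed set of atoms, so on a GPU each grid point can be handed to its own thread; the plain Python sketch below just loops instead:

```python
import math

def potential_map(grid_points, atoms):
    """Electrostatic potential at each grid point from a set of point charges.

    grid_points: list of (x, y, z) tuples
    atoms: list of (x, y, z, charge) tuples
    Units are arbitrary (the Coulomb constant is omitted for simplicity).
    """
    result = []
    # Each iteration of this outer loop is completely independent of the
    # others -- exactly the property that lets a GPU run one thread per
    # grid point instead of looping sequentially.
    for gx, gy, gz in grid_points:
        v = 0.0
        for ax, ay, az, q in atoms:
            r = math.sqrt((gx - ax) ** 2 + (gy - ay) ** 2 + (gz - az) ** 2)
            v += q / r
        result.append(v)
    return result

# One unit charge at the origin, sampled at two grid points:
vals = potential_map([(1.0, 0.0, 0.0), (2.0, 0.0, 0.0)],
                     [(0.0, 0.0, 0.0, 1.0)])
# vals == [1.0, 0.5]
```

In a CUDA version of this sketch, the outer loop disappears entirely: each thread computes one grid point's sum, which is where the "100x faster" class of speedups researchers cite comes from.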
A few years back we might have called desk-side machines that were way beyond PCs' processing power "technical workstations." Whether they really count as supercomputers is beside the point — at the end of the day, making researchers aware of what's out there is what really counts. "I don't think there's anything wrong with the term 'personal supercomputing' if it successfully gets a whole lot more people making use of the compute power that's available," Jones says. "It's marketing, but it's perfectly valid marketing, aimed at an audience that would normally not go anywhere near large-scale supercomputers. ... HPC can do so much for people trying to do simulations and modeling that whatever we call it to get more people using it, the better."