The Megascale-Processor Machine: Does It Compute?

By Meredith W. Salisbury

When Wu Feng chose the topic and speakers for a session he chaired at the Supercomputing 2005 conference last November, he knew he’d have a lively discussion on his hands. What he didn’t predict was that the debate would be so engaging that, 15 minutes into the coffee break, the presenters would still be going at it, the audience still rapt, with conference organizers closing in to end the session so they could get ready for the next event.

The session, a riff on The Six Million Dollar Man TV show, was based on Feng’s challenge to six panelists: “How would you build a 6 million processor system?” Feng, a Los Alamos National Laboratory researcher who designed mpiBLAST, says he deliberately chose the group for its diverse backgrounds. Gordon Bell from Microsoft, credited with bringing mini-supercomputing to the masses, represented the big-iron approach; James Taft hails from the NASA Ames Research Center, where he was involved in gearing up a supercomputer that now benchmarks as the third fastest in the world; from the distributed computing perspective came the University of Oxford’s Carl Christensen, who runs a grid climate simulation program à la the SETI@home model, and Virginia Tech’s Srinidhi Varadarajan, who runs a massive Beowulf cluster; Satoshi Matsuoka from the Tokyo Institute of Technology has a megascale project that keeps all of the processors in one self-contained machine; and Allan Benner from IBM was called in to talk about what Feng calls “the much-ballyhooed” Cell processor from IBM, Sony, and Toshiba.

To throw a monkey wrench into the works, Feng says, “I never really defined what a processor is” — so panelists were duking it out over that definition in addition to debating how (or whether) to build such a monstrous supercomputer. By the end of the session, “there wasn’t even close to a consensus,” he adds.

According to Feng, the panel split mainly into three camps: people who thought building such a machine would be wasteful; people who said it was possible to build that kind of machine; and people who argued that this megascale computer already exists.

Microsoft’s Bell, Varadarajan from Virginia Tech, and IBM’s Benner represented the camp saying that while the 6 million processor system didn’t yet exist, it certainly could be built. Bell’s vision for this kind of supercomputer “was more or less self-contained, but looked like a grid,” says Feng. Benner also drew on the grid concept, but his plan leaned on spreading out the power load: it called for installing a 128-processor system in every school in America and connecting them to form the megascale supercomputer. Benner noted that a major advantage of this approach is that compute architects wouldn’t have to worry about cooling systems powerful enough to handle all 6 million processors in one place. Varadarajan, meanwhile, said he would build one massive superprocessor composed of 6 million distinct processing units.
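A quick back-of-the-envelope check (not a figure from the panel, and assuming the commonly cited estimate of roughly 100,000 or more K-12 schools in the US) shows that such a plan would need only a fraction of them:

6,000,000 processors / 128 processors per school = 46,875 schools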

Would you count that as one processor, or 6 million? This, Feng says, is where several panelists really got into discussion. The Cell processor from Sony, Toshiba, and IBM, for instance, is one general-purpose PowerPC processor with eight synergistic processors. “Is it one processor that has eight little helpers, or is it nine processors?” Feng asks. Even in day-to-day compute systems, this is an issue. The computer sitting on your desk probably has its own video card — does that count as its own processor? If your answer is no, consider this: “It turns out that there are a number of algorithms that run faster on a video card than on a general-purpose CPU,” Feng says. For certain kinds of code, demonstrations have shown that running them on a video card instead of on the main CPU increases speed so much that the video card is the equivalent of a 40 GHz processor, he adds.
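To make the point concrete, here is a minimal, hypothetical sketch, not something presented at the session, of the kind of data-parallel operation that maps naturally onto a graphics card: each element is handled by its own GPU thread, and thousands of threads run at once, which is what lets a video card outrun a general-purpose CPU on this class of code.

// Illustrative CUDA sketch (an assumption for this article, not code from the panel):
// a SAXPY loop in which every array element is updated by its own GPU thread.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one element per thread
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                           // one million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));        // unified memory, visible to CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // launch 4,096 blocks of 256 threads
    cudaDeviceSynchronize();                         // wait for the GPU to finish

    printf("y[0] = %f\n", y[0]);                     // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}

Any code that can be expressed this way, as many independent, lightweight operations with no dependencies between elements, is a candidate for that kind of video-card speedup.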

Satoshi Matsuoka from the Tokyo Institute of Technology relied on the flexibility of defining “processor” to argue that the megascale-processor system already exists, because he had ordered one. His machine, which was not yet installed at the time of the conference, had on the order of hundreds of thousands of general-purpose processors, according to his presentation. Added to that were FPGA, or field-programmable gate array, processing units, as well as additional compute cards designed to be servants to the master processors. “That’s where he picks up the rest of the difference,” Feng says, “and gets up to the million-plus processor element.” When it’s completed, the machine will be a self-contained unit in a single machine room.

At the other end of the spectrum, one panelist who seemed to stick to the idea that components like video processors do not count as separate processors was NASA’s James Taft, who used his presentation time to question the logic behind building something as enormous as a 6 million processor system. He argued that such a resource would be so unwieldy that researchers would never be able to draw down its full computational power — so what’s the point of having one?

Taft’s objection, which Feng says is a valid concern that compute experts would have to take into account when approaching this challenge, was countered by a panelist who claimed that megascale-processor supercomputers already exist, and that at any given time, hundreds of thousands of people are getting use out of them. Carl Christensen from Oxford, whose team uses the SETI@home globally distributed computing model, argued that supercomputers like this had already reached the million-fold processor level by lashing together processors from around the world. According to his presentation, the SETI example already has more than 6 million processors plugging away on its computational problems, and that represents just half a percent of the world’s PCs.
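Taken at face value, those two figures imply a worldwide installed base of roughly 6,000,000 / 0.005 = 1.2 billion PCs; that is a back-of-the-envelope inference from the numbers in his presentation, not a count he reported.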

Because the session ran over, Feng says he had no chance to give closing remarks — not that there would have been any way to tidily sum up all of the presentations with any sort of harmony. Given the opportunity, though, Feng says he would have told the audience “that what you heard here today are very diverse and, some people would say, divergent viewpoints. They’re not [really] divergent, but different aspects of the same system.” Anyone working to design the 6 million processor system would do well to pay attention to all of the perspectives before starting, Feng says.
