Researchers Help Piece Together a View of the Future of Supercomputer Architecture

BERKELEY, Calif.--A new venture that involves a collaboration between the Lawrence Berkeley National Laboratory; the University of California, Berkeley, Computer Science Division; and Sun Microsystems could point the way toward the future of supercomputing. Known as the Cluster of Multiprocessor Systems (Comps) Project, it will focus on development of a networked cluster of off-the-shelf technology that the partners will assemble and evaluate.

The project will be based at the Berkeley Lab's National Energy Research Scientific Computing Center (NERSC). Lead scientists include Horst Simon, head of NERSC; William Johnston, head of the Berkeley Lab's Imaging and Distributed Computing Group; David Culler, a professor of computer science at the University of California who heads the school's Network of Workstations project and is also an NERSC researcher; and Greg Papadopoulos, Sun's vice-president and chief technology officer.

Comps is being launched at a time when the future direction of supercomputing architecture is unclear, according to the researchers. Current options range from machines built around a few powerful vector processors, to massively parallel systems with thousands of processors, to networks of single-processor workstations, to the Comps approach, a network of multiprocessor workstations. The processors involved run the gamut from off-the-shelf devices to very expensive custom equipment.

The initial Comps prototype will include three networked Sun symmetric multiprocessor computers: two eight-processor Enterprise 4000 servers and a third computer with two to four processors. To make the units function together as a supercomputer, special systems software and network technology will be developed. The researchers say the greatest challenge is overcoming communication delays among the various units. To address the problem, multiple 622 megabit-per-second ATM networks will interconnect the Sun devices. Another network technology, the Scalable Coherent Interface, will accelerate parallel programming tasks and the exchange of information among processors.
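To make the latency concern concrete, a standard way to gauge inter-node delay is a round-trip ("ping-pong") measurement between two nodes. The sketch below assumes an MPI-style message-passing library, which the article does not mention; it is only an illustration of the kind of communication the Comps software would need to make fast, not code from the project.

    /* Hypothetical MPI ping-pong sketch: measures round-trip time between
     * two cluster nodes. For illustration only; the Comps project's actual
     * software stack is not described in the article. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int iters = 1000;
        char byte = 0;
        double start = MPI_Wtime();

        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                /* Node 0 sends one byte and waits for the echo. */
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                /* Node 1 echoes the byte back to node 0. */
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        if (rank == 0) {
            double one_way_us = (MPI_Wtime() - start) / iters / 2.0 * 1e6;
            printf("average one-way latency: %.1f microseconds\n", one_way_us);
        }

        MPI_Finalize();
        return 0;
    }

On a cluster whose nodes are joined by commodity networking, the one-way figure reported here is the delay the researchers describe as their greatest challenge.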

The project's outcome could have significant consequences for scientists, including bioinformaticians. According to Johnston, "It is our hypothesis that the Comps architecture will be capable of addressing both the traditional numerical computation and the rapidly increasing needs of the experimental science community for high-performance computing and storage. Further, all of these scientific computing activities can use, and will benefit from, a common and incrementally scalable computing system architecture that can be--as needed--widely distributed around the new high-speed, wide area networks."

Simon observed, "Clusters of symmetric multiprocessor systems have demonstrated their supercomputing potential. What we're now seeking to determine is whether Comps provides a price and performance advantage for NERSC users. Our approach with Comps is to network clusters of symmetric multiprocessor machines that use off-the-shelf, inexpensive processors. What NERSC needs to determine is whether this approach really can provide capability computing for a large national-user facility."

Culler added that many of the communications issues addressed by Comps are similar to those of the Network of Workstations project, in which 140 single-processor workstations are linked together. He and colleagues at the university are developing a package of tools to dramatically reduce communication delays. "The use of multiprocessor nodes is essential for very large systems and it opens up valuable opportunities for clusters of all scale, including higher transfer bandwidth using multiple network interfaces, better resource sharing, and new approaches to fault tolerance," he commented.
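Culler's point about multiprocessor nodes can be illustrated with the hybrid programming style such a cluster invites: threads share memory within one SMP node, while messages carry data between nodes. The sketch below is a hypothetical illustration, assuming an MPI library, POSIX threads, and four threads per node, none of which the article specifies; it is not code from the Comps or NOW projects.

    /* Hybrid sketch: threads compute a partial sum inside each SMP node,
     * then a single message-passing call combines results across nodes.
     * Hypothetical illustration only. */
    #include <mpi.h>
    #include <pthread.h>
    #include <stdio.h>

    #define THREADS 4            /* threads per SMP node (assumed figure) */
    #define CHUNK   250000       /* items handled by each thread */

    static int node_rank;                /* which cluster node this process is */
    static double partial[THREADS];      /* per-thread results, shared in node memory */

    /* Each thread sums its own slice of a synthetic workload. */
    static void *work(void *arg) {
        long t = (long)arg;
        long base = ((long)node_rank * THREADS + t) * CHUNK;
        double s = 0.0;
        for (long i = base; i < base + CHUNK; i++)
            s += 1.0 / (i + 1.0);
        partial[t] = s;
        return NULL;
    }

    int main(int argc, char **argv) {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &node_rank);

        /* Shared-memory parallelism inside the node: spawn and join threads. */
        pthread_t tid[THREADS];
        for (long t = 0; t < THREADS; t++)
            pthread_create(&tid[t], NULL, work, (void *)t);
        for (int t = 0; t < THREADS; t++)
            pthread_join(tid[t], NULL);

        double node_sum = 0.0, total = 0.0;
        for (int t = 0; t < THREADS; t++)
            node_sum += partial[t];

        /* Message passing between nodes: one reduction from the main thread. */
        MPI_Reduce(&node_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (node_rank == 0)
            printf("global sum across all nodes: %f\n", total);

        MPI_Finalize();
        return 0;
    }

The pattern keeps most data movement inside each node's shared memory and reserves the slower inter-node network for occasional, aggregated exchanges.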

Meanwhile, at SC '97, a conference in San Jose, Calif., last month that examined supercomputing issues, one prominent computer scientist predicted that "vector-based supercomputers will vanish in the next five or 10 years." John Hennessy, professor of computer science and dean of Stanford University's School of Engineering, said microprocessors configured in a multiprocessor architecture are the future of supercomputing, propelled by both technical and economic factors.

"Most importantly, the price-performance gap between the two types of machines is closing at a compounding rate," he told attendees, claiming that today's scientific workstations offer more than twice the processing power per dollar of supercomputers. Soon the ratio will be five-to-one, Hennessy predicted. Another problem with supercomputers is their reliance on customized software, he continued. "The days when supercomputer users all wrote their own programs is gone. Today, everyone relies on vendor codes, and what vendors care about the most, how they make the most money, is by writing for the largest market," Hennessy argued.

As an example of future trends, he pointed to experimental architectures such as Stanford's Directory Architecture for Shared Memory (Dash), which shows that shared-memory systems can be scaled up to the size of massively parallel machines, allowing networks of desktop units to function like a single, large computer.

In the end, Hennessy predicted, the supercomputer market will probably move to a combination of cluster-based architecture, exemplified by Comps, and distributed shared memory architecture, such as Dash. "The big question for the future of supercomputing is the tradeoff between centralized or distributed architectures," he stated, conceding that with current technology, the option of aggregating off-the-shelf components comes at the expense of speed.

Dramatic improvements in networking and communications capabilities are likely and will facilitate distributed computing, said Hennessy, but in the long run the best solution will be "to develop the right primitives, the right building blocks, so that we can build systems that work well running alone or in concert with an array of other machines."
