SGI this week launched an “energy-smart” blade platform called the Altix ICE 8200 that it says is well suited to the needs of bioinformatics customers, even though it has yet to sign on any such clients for the new system.
Deepak Thakkar, SGI’s bioscience segment marketing manager, said the system could attract the cost-conscious life science market in a time of soaring energy expenses.
“In terms of the power savings … most life science customers are showing that almost 40 percent of their HPC budget is associated with power cost,” Thakkar said.
The ICE platform enables researchers to cut overall power consumption by up to 87 percent by improving the efficiency of the front end, where power actually enters the system, Thakkar said. The technology can also save about 76 percent of the power at the rack level.
These savings are realized through the unit’s rack design, “which facilitates air flow in such a direction that it helps cool down the rack significantly,” Thakkar said. In addition, Altix ICE employs water-cooled doors that carry away up to 95 percent of the heat generated by the system.
ICE, which stands for Integrated Compute Environment, appears to be making good on its acronym. SGI estimates that the platform can save customers running a 10-teraflop system an average of $53,000 in annual energy costs.
A single Altix ICE 8200 rack can hold up to 512 Intel Xeon processor cores and produce 6 teraflops, the company said.
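Those figures can be sanity-checked with back-of-envelope arithmetic. The sketch below is a minimal estimate only: the 6-teraflop-per-rack figure comes from SGI, but the per-rack power draw, facility overhead (PUE), and electricity price are illustrative assumptions, not SGI data.

```python
# Back-of-envelope annual energy cost for a 10-teraflop Altix ICE system.
# Only the rack performance figure comes from the article; the power draw,
# PUE, and electricity price below are illustrative assumptions.

RACK_TFLOPS = 6.0        # SGI: one 512-core rack delivers 6 teraflops
SYSTEM_TFLOPS = 10.0     # system size used in SGI's savings estimate

KW_PER_RACK = 30.0       # assumed IT power draw per fully loaded rack
PUE = 2.0                # assumed facility overhead (cooling, distribution)
USD_PER_KWH = 0.10       # assumed electricity price
HOURS_PER_YEAR = 24 * 365

racks = SYSTEM_TFLOPS / RACK_TFLOPS
facility_kw = racks * KW_PER_RACK * PUE
annual_cost = facility_kw * HOURS_PER_YEAR * USD_PER_KWH
print(f"{racks:.2f} racks, {facility_kw:.0f} kW facility load, "
      f"~${annual_cost:,.0f}/yr in energy")
```

Under these assumptions, a 10-teraflop system costs on the order of $90,000 a year to power, which makes annual savings in the tens of thousands of dollars plausible.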
Michael Brown, sciences segment manager at SGI, told BioInform via e-mail that the company is marketing the system to the bioinformatics sector, but it had not signed any bioinformatics users for its early-access program.
“We see biosciences as a $300 million to $400 million annual opportunity for SGI Altix ICE systems; about 8 percent of the overall market, which we estimate at $5 billion,” Brown said.
The list price for a 512-core rack is $350,000, SGI said.
Warm Response to a Cool System
SGI has several early-access customers for the system, who have praised the compact nature of the unit and its cooling features.
Matthew Bate, a professor of theoretical astrophysics at the University of Exeter, said that space- and cost-savings drove his department’s decision to try out a 128-core Altix ICE.
Previously, he said, his department lacked its own high-performance computing system and had for the past few years relied on the UK Astrophysical Fluids Facility, a shared 92-processor IBM-based system.
“We wanted to get a local machine on site and this seemed to be a cost-effective and quite a low power-consumption machine,” he told BioInform. “SGI, I think, [is] trying to reduce power consumption and space ... [and] we don't have excess capacity in our existing machine room.”
He added that the group had also weighed heat output, another factor in its decision to try out the unit.
SGI has been offering customers a trial run with ICE. If they like the unit after the roughly three-month trial period, they can purchase it for about $100,000, according to David Wade, senior systems programming analyst at engineering firm General Atomics, an early-access customer for the system.
Wade said that General Atomics, a long-time SGI customer, believes it will get from a single ICE system the computing power it would otherwise need multiple servers from other vendors to match.
Under the Hood
SGI said that the system’s small footprint is due to a “double density” board dubbed “Atoka” and co-designed with Intel, which enables a single blade to be powered by two dual-core or quad-core Intel Xeon processors with up to 32 GB of memory.
SGI’s Brown said that the platform also offers a high level of integration that improves reliability and shortens “bring-up” time.
In addition, the system architecture, which pairs a “leader node” in each rack with an overall system-management node, enables parallel booting of large systems and straightforward system monitoring and management — “something that is rare with large clusters,” Brown said.
The unit also offers a high level of redundancy, including N+1 power supplies, N+1 fans, dual-plane InfiniBand, and diskless nodes, which are “designed to remove single points of failure while managing the associated costs,” Brown said.
Brown added that the system offers additional benefits for specific types of computing tasks. “For parallel applications that span multiple nodes — which is unusual in bioinformatics, but quite common in quantum chemistry, molecular modeling, and many other application areas — the synchronization of system overhead makes the systems more efficient when running real applications,” he said.
“Similarly, the use of RAID arrays that communicate to the individual nodes over InfiniBand delivers higher I/O rates than each node would see if [it] used a local disk.”
That said, he conceded that different prospective customers are interested in different features of the new system.
“Some customers are extremely excited about a potential reduction of up to one-third in their power usage. Some are running out of computer room space. Others are excited about the ability to increase the number of nodes to thousands with minimal external wiring and system management complexity,” he said.
Even so, with no bioinformatics customers yet signed on, it remains to be seen if they too will be “excited” about ICE.