No Flash in the Pan

Anyone at last year's SC09 supercomputing conference in Portland, Ore., could hardly have failed to notice flash memory squeezing into the ranks alongside virtualization, cloud computing, and GPUs as technologies leading the way forward for high-performance computing. Flash is a solid-state storage technology currently integrated into everything from iPods and digital cameras to mobile phones and thumb drives, and it could not have found a more opportune moment to draw attention to itself.

The gap between CPU and I/O performance is only widening as the speed at which hard disk drives can move data to the CPU approaches its limit. Over the past few years, with a definite rush of momentum throughout 2009, a growing list of storage vendors that includes EMC, Fusion-io, IBM, Intel, Sun Microsystems, and Texas Memory Systems has begun offering heavy-duty flash storage solutions with terabytes of capacity for large-scale computing applications. Flash is generally much faster and more energy-efficient than HDDs, and those advantages are only growing as semiconductor technology advances.

Dash

Many in the academic high-performance computing community also believe the technology is now ripe for large-scale scientific computing environments. Case in point is the San Diego Supercomputer Center, which recently announced the rollout of Dash, the first supercomputer of its kind to employ flash memory to accelerate large, data-intensive problems, including genomics. The system uses high-performance SATA solid-state drives from Intel, 68 Appro GreenBlade servers with 48 gigabytes of dynamic random access memory per node, and 16 shared "supernodes" with about 1 terabyte of memory each, and employs ScaleMP's vSMP Foundation software to facilitate multiprocessing. Dash can do the kind of needle-in-a-haystack searching common in bioinformatics, but at a rate 10 times faster than a traditional spinning-disk system would allow. Users of TeraGrid, the largest open-access scientific discovery infrastructure in the US, will also be able to access Dash to kick the tires and help develop application codes that take full advantage of flash memory.

Dash's developers were partly inspired to build the system after many conversations with their biotech neighbors about data management and analysis woes. "This kind of thinking for a data-intensive computer was actually partially originated by talking to genomics people on the 'biotech mesa' around UCSD — there are a bunch of biotech companies around there and at least a couple of them, like Illumina and Scripps Genomics, had approached us with the problem that they are drowning in data," says Allan Snavely, associate director at the SDSC. "They're filling their disks faster than they can actually process the information to make more room for new data, so we do see them as one of the drivers."

Snavely and his team have only just begun to take advantage of flash memory for large-scale scientific computing. Dash is a prototype for Gordon (as in Flash Gordon, get it?), a monstrous flash memory HPC system slated to come online in mid-2011 thanks to a five-year, $20 million grant from the National Science Foundation. At rollout, Gordon will have 245 teraflops of total compute power, 64 terabytes of dynamic random access memory, 256 terabytes of flash memory, and four petabytes of disk storage, and will be capable of data performance more than 10 times that of today's HPC systems.

ReMIX

Others have experimented with using flash memory to accelerate DNA database searches at a smaller, cluster-sized scale. An early pioneer in applying flash memory to bioinformatics is Dominique Lavenier, a professor of computer architecture at the French National Institute for Research in Computer Science and Control. Lavenier's ReMIX project pairs flash memory with FPGA boards and uses a technique called indexing to accelerate searches within a pool of data, in this case GenBank. "Genomics treatments need fast access to data, especially when terabytes of data need to be scanned. So to improve the search time, data can be indexed — or pre-computed — in such a way that only a portion of the data needs to be consulted," Lavenier says. "In that scheme, banks of data are not read sequentially from beginning to end; instead, many accesses are performed only on areas of interest. Hard drives cannot do that efficiently, since a random access typically takes 10 to 15 milliseconds, whereas flash memory accesses can be performed in 20 microseconds."
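
Lavenier's description maps neatly onto a simple pre-computed seed index. What follows is a minimal Python sketch of that idea, not the actual ReMIX implementation (which couples flash to FPGA hardware); the seed length, sequence names, and bank contents are all illustrative.

from collections import defaultdict

K = 11  # seed length; an illustrative choice

def build_index(bank):
    """Pre-compute the index: every length-K seed -> list of (sequence id, offset)."""
    index = defaultdict(list)
    for seq_id, seq in bank.items():
        for i in range(len(seq) - K + 1):
            index[seq[i:i + K]].append((seq_id, i))
    return index

def search(index, bank, query):
    """Look up each seed in the query, then inspect only the regions of interest."""
    hits = []
    for i in range(len(query) - K + 1):
        for seq_id, pos in index.get(query[i:i + K], ()):
            # Each hit costs one small random read into the bank: roughly
            # 20 microseconds on flash versus a 10-15 ms seek on spinning disk.
            region = bank[seq_id][max(0, pos - 50):pos + K + 50]
            hits.append((seq_id, pos, region))
    return hits

bank = {"gb|X01": "ACGTACGTGGCTAAGCTTACGTACGTAAGGCT" * 4}
index = build_index(bank)
print(search(index, bank, "GGCTAAGCTTA")[:3])

The pre-computed index turns a full sequential scan of the bank into a handful of small random reads, exactly the access pattern where flash's microsecond latency beats a disk's millisecond seek.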

Lavenier acknowledges that he may have been ahead of his time when he started developing ReMIX a few years ago, when flash memory was still prohibitively expensive. But all signs point toward prices continuing to fall. "Things are changing, flash is becoming cheaper, so our future plans are to consider commercially available FPGA machines that can house large amounts of flash," he says. "ReMIX was a prototype to demonstrate the concept, but we now have the expertise and are on the way to exporting it to industry."

FAWN

Researchers from Carnegie Mellon University and Intel Labs Pittsburgh recently demonstrated the energy efficiency of flash memory with an experimental cluster architecture called FAWN, or Fast Array of Wimpy Nodes. David Andersen, an assistant professor of computer science at Carnegie Mellon, and his colleagues built a 21-node cluster capable of handling up to 100 times as many queries as an HDD-based cluster, all while consuming less energy than a 100-watt light bulb. Each node comprises an embedded single-core 500 MHz AMD Geode LX processor board and a 4 GB CompactFlash card. "If you have thousands of genomes and you want to scan them all, then you might start looking at flash," Andersen says. "The question really comes down to the size of the data set, how much random access you do to the data, and how much bandwidth you need out of the hard drives, so conventional database workloads work exceptionally well on flash."
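
In the published FAWN design, each wimpy node pairs a small in-memory index with an append-only log on its flash card, so the flash sees only sequential writes and small random reads. The single-node sketch below illustrates that idea under those assumptions; it is not FAWN's actual code, and the class name, file path, and record format are hypothetical.

import os

class WimpyNodeStore:
    """Append-only log on flash plus an in-memory hash index (FAWN-style sketch)."""
    def __init__(self, path):
        self.index = {}                    # key -> byte offset of the latest record
        self.log = open(path, "ab+")       # append-only: flash sees sequential writes

    def put(self, key, value):
        # Sequential append; keys and values are assumed tab- and newline-free.
        self.log.seek(0, os.SEEK_END)
        offset = self.log.tell()
        self.log.write(f"{key}\t{value}\n".encode())
        self.log.flush()
        self.index[key] = offset           # point the index at the newest record

    def get(self, key):
        # One small random read per lookup: the pattern flash handles well.
        offset = self.index.get(key)
        if offset is None:
            return None
        self.log.seek(offset)
        _, value = self.log.readline().decode().rstrip("\n").split("\t", 1)
        return value

store = WimpyNodeStore("/tmp/fawn_node.log")   # hypothetical path
store.put("read_0007", "ACGTACGTTTGACC")
print(store.get("read_0007"))

Appending every update keeps writes sequential, which suits flash's write characteristics, while each lookup costs just one small random read, exactly where flash outruns a seeking disk head.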

Flash advantage

Unlike HDDs, with their spinning platters and read/write heads, flash memory has no moving parts. As a result, access times are generally on the order of five times faster than those of HDDs, and flash devices consume considerably less energy. Another frequently cited benefit is that flash memory's input/output operations per second, or IOPS, performance is essentially constant during reads, unlike an HDD, which has to seek to the physical location of the data. Flash does, however, tend to be about 15 to 20 times more expensive on a per-gigabyte basis than HDDs, although savings on power and cooling bills might balance that cost out over time. Users might also run into legacy issues when incorporating flash into pre-existing, slower, disk-based storage architectures, since storage controllers were originally designed around the needs and specifications of HDDs, although there are plenty of workarounds and software support is catching up.

"All these bioinformatics applications are search-oriented and database-oriented, and any time you hear that, it's a perfect application for flash," says Bob Murphy, senior manager for Global HPC Open Storage at Sun Microsystems. The bioinformatics community is "so used to short-stroking disk drives, just using the outer part of disk drives and a lot of disk drives in parallel to get the IOPS up," he adds. "All the expense and power and heat that goes into that, they're able to replace that now with a couple of these flash devices and get even faster performance, get rid of all the disks and all the costs they've had to put up with in the past."

Critics of flash memory often cite durability concerns due to the wear-out of memory cells. While consumer-grade flash devices permit roughly 10,000 writes per cell before becoming unusable, the newer breed of enterprise-grade flash units can handle roughly 100,000. Researchers have also developed wear-leveling algorithms to help flash devices work around memory cell burnout. Some actually see this finite write budget as a rare point of predictability because, after all, HDDs can and do fail just as often, and with much less warning. "With flash, we know how long the service life is and it's very predictable, whereas a disk drive in your laptop or PC, that thing can go out any second now. It's random because it's mechanically based," Murphy says.
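
To make the wear-leveling idea concrete, here is a toy translation layer that steers each write to the least-worn free physical block, so no single cell exhausts its write budget early. Real flash controllers are far more sophisticated; the block count and write limit are illustrative.

class WearLeveledFlash:
    """Toy wear-leveling translation layer: logical blocks migrate across physical blocks."""
    def __init__(self, n_blocks, max_writes=100_000):
        self.data = [None] * n_blocks          # physical blocks
        self.wear = [0] * n_blocks             # writes endured by each block
        self.mapping = {}                      # logical block -> physical block
        self.max_writes = max_writes

    def write(self, logical, value):
        # Steer the write to the least-worn physical block not currently in use.
        free = [p for p in range(len(self.data))
                if p not in self.mapping.values()]
        if not free:
            raise IOError("no free physical blocks")
        target = min(free, key=lambda p: self.wear[p])
        if self.wear[target] >= self.max_writes:
            raise IOError("flash worn out")    # predictable end of service life
        old = self.mapping.get(logical)
        if old is not None:
            self.data[old] = None              # retire the stale copy
        self.data[target] = value
        self.wear[target] += 1
        self.mapping[logical] = target

    def read(self, logical):
        return self.data[self.mapping[logical]]

dev = WearLeveledFlash(n_blocks=8)
for i in range(20):
    dev.write(0, f"rev{i}")                    # repeated rewrites migrate across blocks
print(dev.read(0), dev.wear)                   # wear spreads evenly over all blocks

Because the write budget is a known constant, the device's end of life can be forecast from its wear counters, which is the predictability Murphy contrasts with a mechanical drive's sudden failures.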
