
The 2003 Genome Technology All-Stars


In all likelihood, you have seen the worst that the genomics industry has to offer. In the past few years, investors bailed, stock and company valuations dropped to all-time lows, and many startups failed along the way. But through it all, it’s become obvious that you just can’t stop good science. Which is why Genome Technology is delighted to offer its third annual All-Stars issue — a celebration of the scientists with the most outstanding accomplishments in the last year, as chosen by our readers. Our leaders hail from work in sequencing, proteomics, bioinformatics, databases, genotyping, and gene expression. Awards also recognize alliance-building skills, best technology and company or institute, and to cap it all off, the person of the year.

The All-Stars competition continues to improve. The process we use to determine the winners also evolves as we find new ways to keep the contest fair and representative of the whole evolving genomics industry. As usual, the contest occurred in two stages: a nomination round and then the actual vote. Several months ago, we encouraged readers to submit names and supporting comments for each of the 10 categories. When nominations closed, the GT team sifted through hundreds of entries to come up with this year’s stellar ballot.

For the voting stage we tried something different: Hollywood’s beloved academy approach. We invited industry leaders in various technology niches and disciplines to judge the nominations, and received acceptances from 101 of them. It was they, the GT All-Stars Academy, who made the final decisions in this year’s vote (see Academy listing, pg. 28), and we thank them for their time.

From a crop of well-qualified nominees, the Academy diligently chose the 2003 All-Stars. Our past two years have seen several All-Stars win by landslides; that the races this year were often much closer told us that virtually all of these nominees are influential and highly regarded.

This year’s All-Stars are every bit as bright and inspiring as any this industry has seen. Together, they promise to continue to shape genomics in ways that none of us might be able to predict today.

Genome Technology thanks the 101 members of the 2003 GT All-Stars Academy

Mark Adams, Case Western Reserve University
Ruedi Aebersold, Institute for Systems Biology
Christopher Ahlberg, Spotfire
Russ Altman, Stanford University
David Altshuler, Massachusetts General Hospital
Leigh Anderson, The Plasma Proteome Institute
Phil Andrews, University of Michigan
Jeff Augen, TurboWorx
Lee Babiss, Roche
David Baker, University of Washington
Lionel Binns, Hewlett-Packard
Tania Broveak, Electric Genetics
David Brown, Cellzome
Charles Cantor, Sequenom
Arthur Caplan, University of Pennsylvania
Andrew Carr, Amersham Biosciences
Howard Cash, Gene Codes
Marc Casper, Thermo Electron
Eugene Chan, US Genomics
George Church, Harvard University
Tim Clark, Millennium Pharmaceuticals
David Clemmer, Indiana University
Mark Cockett, Bristol-Myers Squibb
Peter Coggins, PerkinElmer
Dalia Cohen, Novartis
Francis Collins, NHGRI
John Cottrell, Matrix Science
Jamie Cuticchia, Research Triangle Institute
Dan Davison, Bristol-Myers Squibb
Charles DeLisi, Boston University
Nicholas Dracopoli, Bristol-Myers Squibb
Evan Eichler, Case Western Reserve University
Janan Eppig, Jackson Laboratory
David Fenyö, Amersham Biosciences
Jim Fickett, AstraZeneca
Michael Finney, MJ Research
Jay Flatley, Illumina
Claire Fraser, TIGR
Rainer Fuchs, Biogen
Terry Gaasterland, Rockefeller University
Nat Goodman, Institute for Systems Biology
George Grills, Harvard University
Sam Hanash, University of Michigan
David Haussler, University of California, Santa Cruz
Stanley Hefta, Bristol-Myers Squibb
Stephen Heller, NIST
Winston Hide, SANBI
Denis Hochstrasser, University of Geneva
Christopher Hogue, Samuel Lunenfeld Research Center
Arthur Holden, First Genetic Trust
Lee Hood, Institute for Systems Biology
Don Hunt, University of Virginia
Larry Hunter, University of Colorado
Peter Karp, SRI International
Jim Kent, University of California, Santa Cruz
Carol Kovac, IBM
Pui-Yan Kwok, University of California, San Francisco
Nathan Lakey, Orion Genomics
Eric Lander, Whitehead Institute/MIT Center for Genome Research
Frank Laukien, Bruker Daltonics
Pierre Legrain, Commissariat à l’Énergie Atomique
Michael Liebman, University of Pennsylvania
Steve Lincoln,
Klaus Lindpaintner, Roche
Mary Lopez, PerkinElmer
Michael Man, Pfizer
Elaine Mardis, Washington University
Steve Martin, Applied Biosystems
Dick McCombie, Cold Spring Harbor Laboratory
Kevin McKernan, Agencourt Bioscience
John McPherson, Baylor College of Medicine
Jill Mesirov, Whitehead Institute
Joachim Messing, Rutgers University
Gene Myers, University of California, Berkeley
John Nelson, Amersham Biosciences
Eric Neumann, Beyond Genomics
Francis Ouellette, University of British Columbia
Scott Patterson, Farmal Bioscience
Emanuel Petricoin, FDA
Dale Pfost, Acuity Pharmaceuticals
John Quackenbush, TIGR
Bruce Roe, University of Oklahoma
Jane Rogers, Sanger Institute
Keith Rose, GeneProt
Jonathan Rothberg, CuraGen
Lee Rowen, Institute for Systems Biology
Steve Salzberg, TIGR
Maciek Sasinowski, Incogen
Jeffrey Skolnick, University of Buffalo
Lloyd Smith, Third Wave Technologies
Richard Smith, Pacific Northwest National Laboratory
Todd Smith, Geospiza
Michael Snyder, Yale University
Bob Strausberg, TIGR
Donny Strosberg, Hybrigenics
Shankar Subramaniam, University of California, San Diego
Craig Venter, The Center for the Advancement of Genomics
Friedrich von Bohlen, Lion Bioscience
Keith Williams, Proteome Systems
Rick Wilson, Washington University
John Yates, Scripps Research Institute

The first- through fifth-place finishers:

Most Outstanding in Sequencing Technology
1. George Church, Harvard Medical School
2. Eric Green, NHGRI
3. Elaine Mardis, Washington University
4. Carl Fuller, Amersham Biosciences
5. (tie) Steve Quake, California Institute of Technology
Thomas Roth, 454
David Schwartz, University of Wisconsin

Greatest in Gene Expression
1. Pat Brown, Stanford University
2. Joseph DeRisi, University of California, San Francisco
3. David Sabatini, MIT
4. John Quackenbush, TIGR
5. Stephen Friend, Merck

Most Innovative in Bioinformatics
1. Lincoln Stein, Cold Spring Harbor Laboratory
2. Webb Miller, Penn State University
3. Trey Ideker, Whitehead Institute
4. Michael Liebman, University of Pennsylvania
5. (tie) Karl Clauser, Millennium Pharmaceuticals
Steve Jones, Michael Smith Genome Sciences Centre

Most Prolific in Proteomics
1. Ruedi Aebersold, Institute for Systems Biology
2. Mike Snyder, Yale University
3. Richard Caprioli, Vanderbilt University
4. Steven Gygi, Harvard Medical School
5. (tie) Hanno Langen, Roche
Richard Smith, Pacific Northwest National Laboratory

Superior in SNPs/Genotyping
1. David Cox, Perlegen Sciences
2. David Altshuler, Whitehead Genome Center
3. Debbie Nickerson, University of Washington
4. (tie) Arthur Holden, SNP Consortium
Steven Salzberg, TIGR

Database Doyen/Doyenne
1. Rolf Apweiler, European Bioinformatics Institute
2. Mike Cherry, Stanford University School of Medicine
3. Bob Strausberg, TIGR
4. Peter Karp, SRI International
5. Owen White, TIGR

Most Visionary Dealmaker
1. Jeffrey Trent, International Genomics Consortium
2. Lee Hood, Institute for Systems Biology
3. (tie) Noubar Afeyan, Flagship Ventures
J. Craig Venter, TCAG and IBEA
4. Friedrich von Bohlen, Lion Bioscience

Technology/New Product of the Year
1. RNAi
2. Applied Biosystems, 3730 DNA Analyzer
3. Fourier transform hybrid mass spectrometry
4. 454, whole genome sequencing technology
5. Affymetrix, SARS CustomSeq GeneChip

Company or Institute of the Year
1. Cold Spring Harbor Laboratory
3. (tie) Institute for Systems Biology
Novartis Institute for Biomedical Research

Person of the Year
1. Francis Collins, NHGRI
2. Claire Fraser, TIGR
3. Andy Fire & Craig Mello, Carnegie Institution and University of Massachusetts
4. Jim Kent, University of California, Santa Cruz
5. Phil Bourne, San Diego Supercomputer Center

Most Outstanding in Sequencing Technology: George Church, Harvard Medical School
Simply Exotic Sequencing

Had the GT All-Stars contest been around 20 years ago, George Church could have won as easily then as now: In 1984 he developed the first direct genomic sequencing method in Wally Gilbert’s lab at Harvard, he helped plan the Human Genome Project, and he was one of 11 participants in the first Human Genome Institute Workshop. Between those old days and now, Church also helped found genome centers at Stanford, MIT, and Genome Therapeutics Corp., and took a faculty position at Harvard, where he is now a professor of genetics at Harvard Medical School and director of the Lipper Center for Computational Genetics. Among his credits are the formulation of the concepts of molecular multiplexing and tags, homologous recombination methods, and an array DNA synthesizer.

To be sure, Church would be a good candidate for a lifetime achievement award in genome sequencing, but as far as we know, it was his current work, not his past, that earned him the 2003 GT All-Star Award for Sequencing.

The 49-year-old scientist acknowledges of his lab, “We’ve published a fair amount recently.” His team’s latest developments have earned widespread attention from other genomics scientists, some of whom have published follow-on research employing one particular new technology that has emerged from the Church lab.

The method is known as polony sequencing — a process in which PCR colonies, or polonies, are amplified from single DNA molecules immobilized in an acrylamide gel on a glass microscope slide; one DNA strand is then immobilized and annealed to a sequencing primer. High-throughput sequencing would be conducted by adding one fluorescence-labeled base at a time. (The polony technology was described in more detail in Genome Technology’s July 2003 issue, and Church himself was profiled here in April 2003.)

While the technology is already in use for exon typing, haplotyping, alternative splicing, and sequence tagging, Church and others are still working to refine polonies for sequencing applications. So far, they have succeeded in sequencing small numbers of bases with the technology. Church says his main focus now is on reducing the size of the polonies and shrinking the individual sequencing particles with beads. His original experiments relied on 100-micron particles; he has now achieved success sequencing with one-micron particles. Explains Church: “The benefit with one-micron particles is that you can fit 2 million per slide. With 30-base reads, that’s enough to cover one human genome.”

Researchers outside of Harvard who are now using the polony approach for applications including genotyping and expression profiling are Jeremy Edwards at the University of Delaware and Rob Mitra at Washington University, both former postdocs of Church, and Bert Vogelstein at HHMI. Edwards, whose lab studies systems biology, has published several papers employing the technology for gene expression profiling. “We’re interested in systems biology, and the technologies George is developing are perfect. No others provide the quantitative precision that polony technology does,” Edwards says. Plus, he adds, at only a couple of dollars per data point, it’s far cheaper than the gold-standard expression profiling method, qPCR, which can run closer to $100 per data point.

Church says the polony approach has captured attention by being “down-to-earth yet exotic enough.” Indeed, that combination seems to be the secret to Church’s success. Edwards describes the lab where he spent one year as an environment that encourages wild creativity: “George has tremendous people come through his lab, and he gives people the freedom to explore their ideas.” And yet Church reveals his practical side when asked to describe his dream technology: He simply wishes for an inexpensive, high-resolution scanner that would let him transfer the data he generates into a computer.

— Adrienne J. Burke

Greatest in Gene Expression: Pat Brown, Stanford University
Brown’s Brainstorm

You might remember Pat Brown from high school: he was the one sneaking into the chemistry lab when no one was around to work on experiments of his own creation. In a way, he finds it hard to explain why science so appealed to him — his career in the field seems almost inevitable.

For a brief time in college, he thought about going into math instead. “But it was kind of like doing crossword puzzles for a living,” he says; he was looking for a way to help people. So he pursued an MD/PhD at the University of Chicago, earning both degrees by 1982.

Twenty-one years later, Brown, 49, is the two-time winner of the gene expression All-Star award and it’s safe to say that the microarray community collectively gives thanks that Brown didn’t stray into mathematics. But even without the math temptation, his path to being an array pioneer was hardly predictable.

Brown began his career with a postdoc position at the University of California, San Francisco, where he studied retroviral integration, trying to figure out “how retroviruses insert their DNA into a host genome.” Later, he was recruited by Stanford. In the course of his research, he developed a habit that has stuck with him through the years: “My favorite thing to do is just brainstorm ideas for new projects,” he says, pointing out that these projects are usually unrelated to what he’s working on. “Ninety-nine percent of them I never do anything about.” But one led him to dream up new concepts for high-throughput genotyping.

“One of the components of that, which I thought was trivial, was developing a way to mass-produce microarrays,” he says. “Once we started using them to look at gene expression, it was clear to me that it was a much richer source of information than [genotyping].” Gene expression data could really get at the root of diversity in a body, Brown says, pointing to the significant difference “between a T cell and a motor neuron or a muscle cell and a gastric epithelial cell.”

Ever since, Brown’s name has been synonymous with advances in microarrays. One thrust of his lab is developing tools and technologies to enable better studies of post-transcriptional gene expression. Also, “a lot of our work is … looking at cancer and trying to get at basic molecular mechanisms and how to diagnose [it].” Another research path that’s not too far along yet is figuring out where mRNAs go when they leave the nucleus — it’s Brown’s theory that each one has a very specific destination in the cell. All of these, he notes, are fairly wide-open research areas: “I’m drawn to projects that are exploring relatively unexplored territory, as opposed to working out the last details of something.”

Another key focus for Brown, though not one that’s playing out in the lab, is his role in helping launch the Public Library of Science, an open-access publisher of scientific journals. The first issue came out last month. “Every single thing we publish is free in the public domain — no password, no restrictions,” Brown says. He’s hoping this will help overcome what he sees as a major challenge to the industry: “The rate-limiting step really is integrating your new results with this enormous wealth of information that’s already been collected and published, but it’s extremely inaccessible and inconvenient to work with … by virtue of this anachronistic, absurd model for scientific publication.”

As for the array field, Brown says challenges lie in getting cheaper, better technology; more quality-controlled data; and tools that can look at proteins or macromolecules, not just RNAs. But he’s optimistic about these: “There’s a lot of progress being made,” he says.

— Meredith W. Salisbury

Most Innovative in Bioinformatics: Lincoln Stein, Cold Spring Harbor Laboratory

From Gamble to Google

Lincoln Stein’s foray into bioinformatics was something of a lucky gamble, and not even one he made for himself.

Stein, who started out in cell biology working on the reproductive system of a parasitic worm around 1986, stumbled across sequencing in the course of his project but balked at Harvard’s weekly mainframe charge to run the sequence assembly. “I had a Mac at home — 64K, black and white — so I set out to write my own sequence assembler.” He taught himself assembly language and “discovered that computers were kind of fun to work with.”

An MD/PhD student, Stein wanted to find “some kind of research that I could do that would involve both computers and biology.” He went through a slew of possibilities: medical informatics and patient records; radiology and image analysis; and medical education and teaching. None was exactly what he was hoping for.

Enter Nat Goodman, then informatics director at Eric Lander’s group at the Whitehead. Goodman met Stein, who was working as a programmer at the Brigham and Women’s Hospital and had come across a grant opportunity with the fledgling genome project for bioinformatics work. Goodman recalls being “immediately impressed” with Stein’s aptitude for informatics and helped him write the grant. Though Goodman thought it was an excellent grant application, he knew there was little chance Stein’s alternative approach would get funded.

“I really wanted to hire him but felt it would be unethical just to steal him away [from the hospital],” Goodman says. “So I made a deal with his PI: if she won the grant, she could keep Lincoln. And if she didn’t win the grant, then I got Lincoln.”

So in 1992 Stein started a summer internship at the Whitehead working on the mouse genetic map. Through his five years at the institute, he also helped on human mapping projects and the very beginning of the human genome sequence.

Goodman says Stein completely turned around the institute’s informatics world. “With a lot of ingenuity, Lincoln made tremendous innovative contributions that allowed us to produce very effective solutions with very few people and few computers.” Stein pushed the lab to use the Web and Perl long before either standard was established in the field. “He really created a lot of the methods that we take for granted today,” Goodman says.

Stein became informatics director at Whitehead and left in 1997, spending a six-month stint at CuraGen. Deciding that the private sector wasn’t for him, he left and kicked off 1998 with a new job in his home state at Cold Spring Harbor Laboratory on Long Island, NY.

Most of his time goes into developing genome databases, says Stein, 43. He’s currently working on the international HapMap project, which he expects will “enable researchers … to perform very high-resolution association studies to find the genes that are associated with diseases.” He also works on WormBase, the Gramene Database for rice and monocots, the Genome Knowledgebase, and the Plant Ontology Consortium, and has made a template “for people who wish to set up databases for model organisms.”

Stein, who sees data integration as the major challenge facing informaticists, says his dream is to have “the bioinformatics Google. You type in a gene name and it’ll tell you everything that’s known about it — all the papers published, all the clones, all the laboratory resources, all the pathways that it participates in, all the tissues that it’s expressed in.” But Stein says this isn’t the grand vision it sounds like: “I think in the next 10 years we will have something like that.”

— Meredith W. Salisbury

Most Prolific in Proteomics: Ruedi Aebersold, Institute for Systems Biology
Toward ‘Smarter Experiments’

Shortly after Ruedi Aebersold found out that he’d won this award for the second year in a row, he announced that in a year he’d be returning to his native Switzerland to take an academic position at ETH, the Swiss Federal Institute of Technology.

But it’s not the publicity that is driving him back to his homeland.

Aebersold, who grew up near Bern, says he accepted the position at Zürich-based ETH because “it was just a great opportunity to build up some systems biology and functional genomics initiatives there.” Europe lagged behind the US in its contributions to sequencing the human genome, he says, and now research institutions on the Continent want to make up for that by launching major initiatives to help explain how the human genome functions.

Aebersold makes a good candidate to lead that type of initiative in Europe because of the success he’s had developing proteomics technology over almost 20 years in North America — particularly as a faculty member at the University of Washington and a co-founder of the Institute for Systems Biology in Seattle.

Sam Hanash, president of HUPO, says, “I have known Ruedi since his post-doc days. He has been throughout his entire career absolutely brilliant. He has made tremendous contributions to proteomics, the type that bring credit to the field.”

Throughout the past year Aebersold’s contributions to proteomics have remained steady. Supported in part by a $22 million ISB contract from the National Heart, Lung, and Blood Institute, Aebersold’s group has devised open source computer programs for rating the validity of protein identifications extracted from mass spectrometry experiments. The team has also developed new protein-tagging chemistries for measuring the relative abundance of specific protein complexes under various conditions, and has begun using MALDI tandem mass spectrometry to sequence only those proteins whose abundance varies in comparison to a reference sample.

“Right now, we just sequence a lot of stuff which isn’t really interesting, because it’s just basically a shotgun approach,” Aebersold says. “But if we can quantify first, we can then make decisions [as to] what’s interesting — what to sequence — and I think that increases the efficiency of the whole process. It also allows us to home in on specific types of proteins which show a particular behavior, and so basically we can do much smarter experiments.”

His group has also begun working with the Fred Hutchinson Cancer Research Center in Seattle in an effort to identify biomarkers for cancer in human serum. Aebersold published a paper earlier this year describing his approach to separating glycosylated proteins from serum with potential as biomarkers, and has started proof-of-principle studies in mice before advancing into the clinic.

In his new position at ETH, Aebersold says he’ll continue to tackle problems in proteomics and systems biology, including what he sees as an overall lack of access for many scientists to the technology and instrumentation necessary for performing large-scale protein and gene analysis. And his departure from ISB won’t be the end of his relationship with the institute he helped establish: “The goal is to keep the impact on ISB minimal,” he says.

In the near term, Aebersold will be busy making preparations to establish a new lab in Switzerland and finding someone to replace him at ISB. Despite the many open academic positions in the field, his current position should be attractive to others “because the lab is really well equipped, it already has good people there, and we have this high-throughput proteomics facility which is running really well that will stay there.” He’s got a year to get everything settled, he adds, “but it will be complicated.”

— John S. MacNeil

Superior in SNPs/Genotyping: David Cox, Perlegen Sciences
Variation Veteran

David Cox’s career in medicine started out earlier than most. Raised in Oregon, he was recruited from his public high school by Brown University “as an affirmative action person for geographic diversity,” he says. Brown’s program was a joint undergraduate/medical degree — “so here you are, a high school senior, and you’re already admitted to medical school,” Cox says.

His pediatric residency at Yale proved to be what most shaped his career afterward. “I realized when I graduated from medical school that we didn’t understand how anything worked,” he says. So on his first day in the kids’ ward, “I really thought that the jig was up. … At nighttime those babies would be dead and everyone would know that I didn’t know anything.”

When the babies were all alive and well that night, Cox realized, “I didn’t have to know how everything worked. That’s how human complexity can be dealt with. … You can still have an immense amount of control without a complete understanding of how it works.”

And that lesson has gotten Cox, now 57, through more than a quarter-century of studying genetics and human variation. An early proponent of studying variation — long before anyone really knew what a SNP was — Cox headed to the University of California, San Francisco, and latched onto genomic research.

In 1986, Cox says, “a very important thing happened.” Rick Myers arrived at UCSF and started what Myers calls a “joined-at-the-hip” collaboration with Cox. Together, they won a grant to set up one of the first genome centers, which moved with them when they decided to head to Stanford in 1993.

But when the draft genome sequence was announced in 2000, Cox’s heart sank, he says. After 25 years of hard work, the genome sequence was finally available — “but how many more decades would it be before we had multiple sequences of the genome to see where the variants were?” he wondered.

Surprising his peers, Cox took a sabbatical at Affymetrix when Stephen Fodor convinced him to help develop a technology that would enable the sequencing of dozens of genomes. It was a success, and Cox spun off Perlegen Sciences in 2001.

Now CSO, Cox says Perlegen’s wafer technology is being used both for association studies and for efficacy studies for pharmaceutical companies. The company has gotten costs down to two cents per genotype, he says, making his 50-genome dream realistic.

In some ways, these studies cap what has been a focus for Cox for almost 15 years. As early as 1989, Myers says, Cox was talking about linkage disequilibrium and how to understand variation. “He really had a bee in his bonnet about this for a long, long time,” says Myers, pointing out that Cox was a practicing clinician for most of his career. “He’s really dedicated to trying to figure out how to use genetics to solve human diseases.”

The problems facing the field are what Cox sees as contentious debates on how to progress. Arguments about studying rare versus common SNPs have been “extremely divisive. … They both are important,” he says. The other major dilemma is what Cox calls a “social challenge”: figuring out how to divvy up scant resources to study certain diseases or populations.

Cox is proud of the technological feats that finally allowed the grand-scale resequencing that he’s been seeking for 25 years. But always, he goes back to those babies at Yale — and knows that Perlegen’s wafer isn’t solving their problems. “No one declares victory” with the technology, Cox says: it’s all about studying variation and figuring out how to apply the genetic knowledge.

— Meredith W. Salisbury

Database Doyen: Rolf Apweiler, EMBL
The Accidental All-Star

To hear Rolf Apweiler tell it, he might as well have won the All-Star for “most unqualified.” Though he is now an undisputed database guru, Apweiler’s beginnings in the field were decidedly less noteworthy.

As an undergrad biology student at the University of Heidelberg in 1987, Apweiler saw a piece of paper on a blackboard in class one day. It was a job advertisement from EMBL looking for, Apweiler recalls, “a person who had excellent knowledge in biology, good knowledge of computation, and a good command of English.”

He didn’t quite measure up. “English was my worst subject in school, I had never before seen a computer,” and of course he hadn’t even secured a biology degree. He applied anyway, and wound up being the only person nervy enough to do so. He was hired as a part-time curator for the Swiss-Prot database.

It’s possible that Apweiler’s status as a novice gave him an edge. In the early days of Swiss-Prot, there was virtually no protocol anyway, and Apweiler’s fresh perspective gave him a chance to figure out how the database ought to work. “There was not a structure yet,” he says. “You started to think about how to structure the data better, how to control the vocabulary better. Things you did wrong at the beginning, you had to fix at some stage — you had to learn the hard way.”

Now 40, Apweiler is still with EMBL at its Hinxton, UK, outstation, where he heads the sequence database group. He has beefed up his credentials since the early days, adding a PhD from Heidelberg to his biology bachelor’s.

His main project remains Swiss-Prot, along with overseeing its merger with TrEMBL to form UniProt. For that task, he has worked to “make a lot of changes toward the best practice of each database” and to create a single pipeline for the data to ensure more uniform handling.

The native of Aachen, Germany, also works on InterPro, the Gene Ontology project, and efforts to handle protein analysis data and splice databases. Apweiler says he tries “to devote as much time as possible now to the new databases” that he expects to come from the proteomics field. “For sequence, I wouldn’t say it’s perfectly under control, but we have procedures for how to handle that. Proteomics data … still needs a lot of basic work.”

Graham Cameron, associate director of the EBI, credits Apweiler with “having an eye for what needs to be done and getting in there first to do these things.”

The source of the next major data wave will be mass specs, Apweiler says. “Protein precursors, different protein products — organ to organ, tissue to tissue — all that leads to an incredible variety of proteins coming out of a relatively low number of genes.”

That challenge, like the others he points to in managing databases for the next several years, is nothing new. Handling the sheer volume and quality assurance of data will always be an issue, Apweiler says, as will “integrating all this data in better and more clever ways.”

Another problem he points to: convincing funding agencies to fork over enough money to properly curate data. Generating the data is extremely expensive, he says, but that’s wasted money “if you don’t store and make it available in a good way.”

Despite the financial challenge, Apweiler feels his work is worthwhile. “All this data from the sequences is raw data and predictions,” he says. “If you want to make predictions, you want to make [them] against the most reliable data sources. … We put a lot of work into good annotation of this data.”

— Meredith W. Salisbury

Most Visionary Dealmaker: Jeffrey Trent, Translational Genomics Research Institute
Building Bridges

Though peers selected Jeffrey Trent as the most visionary dealmaker, Trent himself was somewhat reluctant to accept the award, concerned that the word “dealmaker” isn’t befitting a diehard public-sector figure.

Not to worry, Jeff. As GT voters knew, Trent’s Arizona-based, nonprofit TGen initiative raised an initial $92 million and has been going strong since its inception in July 2002 as a partnership among three of the state’s universities.

Trent’s career began around 1980 after he earned his PhD in genetics from Indiana University, where he studied cytogenetics to understand changes in cancer. He followed that up as a faculty member for about 10 years at the University of Arizona, but eventually left to join Francis Collins’ new genome center at the University of Michigan. About four years later, Collins was off to run NHGRI, and he “asked me to start and develop the intramural program there,” Trent says. He remained director of that program until the start of this year.

“Believe it or not, it took some consortium building there, too,” Trent says of his start at NHGRI. He immensely enjoyed the position, launching the NIH’s first new intramural program in two decades or so and growing it to about 500 people. But it was no easy path. “The NIH is essentially a zero-sum game. If there’s a new institute, people are looking at you as you’re taking a percentage of this fixed pie,” he says. Trent’s ability to strike alliances worked out well all around: Of the 18 intramural programs on the main Bethesda campus, genome institute faculty published papers with all but two — helping to “permeate genomics tools across the institutes,” he says.

Now 50, Trent says he was considering the 15 remaining years of his career and how to frame them around “trying to move some of the technology for discovery-based research into the clinic.” Longtime friend Dan Von Hoff had developed a clinical trials program in Arizona, and Trent thought that might be a perfect match for the discovery engine he hoped to get started.

At the same time, Arizona was examining ways to establish its presence in biotech. Trent was recruited to lead what was then a unique concept: a nonprofit research institute with incredibly high-throughput technologies linked to Arizona State University, the University of Arizona, and Northern Arizona University.

Trent leapt in with enthusiasm, relying on his experience in forming partnerships to head up TGen. Early evidence suggests that he’s been successful. The Pima Indians, a Native American group with the highest rate of diabetes in the world, contributed $5 million to the medical program. “It’s pretty remarkable to have that kind of partnering relationship with one of these communities,” Trent says.

Phoenix itself pushed forward with plans for the TGen building, a $42 million research haven that will be ready to go in December 2004. Another triumph came in teaming up with Mexico’s new genome institute, and Trent is also excited about the pathogen program his institute just launched after reeling in Paul Keim, whose history in anthrax studies makes him one of the leading pathogen experts in the country. And a deal with IBM has given the institute, Trent says, “the largest supercomputer dedicated to genomics” — expected to be around #35 or #40 in the next rating of the top 500 supercomputers.

Despite the successes, Trent isn’t relaxing. When he thinks about his biggest challenge, “sustained funding is probably the easiest to point to,” he says. “But I think sustained relationships — establishing [partnerships] and maintaining those in a way that you not just promise but you deliver — is going to be the challenge going forward.”

— Meredith W. Salisbury

Technology/New Product of the Year: RNA Interference
The Silent Treatment

Little surprise here. RNA interference, the hottest thing since PCR, is the hands-down winner in the Technology of the Year category.

As GT described in a four-part series on RNAi earlier this year, the technology silences, or knocks down, the expression of a gene: a double-stranded RNA introduced into a cell deactivates the gene bearing its complementary sequence by triggering the destruction of its messenger RNA. A subsequent discovery showed that a small, 21-nucleotide duplex, or short interfering RNA, can silence a gene in a mammalian cell when it is incorporated into a protein complex that targets and cleaves the mRNA.

University of Arizona’s Rich Jorgensen first stumbled upon the effect, which he dubbed cosuppression, in 1990 when he inadvertently silenced genes in petunias while trying to deepen the flowers’ purple color by genetic modification. Then, five years ago, Andrew Fire and Craig Mello, of the Carnegie Institution and the University of Massachusetts Medical School, respectively, demonstrated this effect, which they called RNA interference, in C. elegans. And finally, in 2001, Tom Tuschl, now of Rockefeller University, succeeded in inducing silencing by the method for the first time in a human cell. The knockdown method is now touted as the single best way to study how genes work.

Today, more than 20 major US research institutions have laboratories carrying out RNAi experiments, and more than $24 million in government grants has been awarded for RNAi work in recent months. A sampling of the grants reveals the diverse applications for the technology: $12.1 million for “RNA Interference as a Weapon Against Bioterror,” $2.2 million for “Expression of Anti-HIV siRNA in Blood Cells,” $1.7 million for “Gene Silencing as an Anti-Viral Defense in Animal Cells,” $1.6 million for “RNAi Mediated Suppression of Hepatitis B Replication,” and $1.5 million for “Functional Genomics Screen Using RNAi in Drosophila.”

In fact, RNA interference has triggered the emergence of a subsector to the genomics industry. Several companies are making a business selling synthesized short interfering RNAs. Others sell vectors for delivering the reagents, transfection or labeling tools, or software for selecting the 21-base-pair sequences. And, of course, a new crop of biotechnology companies, including Alnylam and Sirna, aim to turn siRNAs themselves into therapeutics.
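The sequence-selection software mentioned above typically applies simple heuristics. As a rough, hypothetical sketch — not any vendor’s actual algorithm — one early rule of thumb from Tuschl’s lab was to target AA(N19) sites on the mRNA with moderate GC content (the thresholds here are illustrative):

```python
# Toy siRNA target-site picker: scan an mRNA for 21-mers that begin
# with 'AA' (the AA(N19) rule) and have moderate GC content.

def gc_fraction(seq):
    """Fraction of bases in seq that are G or C."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def sirna_candidates(mrna, gc_min=0.30, gc_max=0.52):
    """Return (position, 21-mer target site) pairs starting with 'AA'."""
    hits = []
    for i in range(len(mrna) - 20):
        site = mrna[i:i + 21]
        # Apply the AA(N19) rule, then check GC content of the N19 portion
        if site.startswith("AA") and gc_min <= gc_fraction(site[2:]) <= gc_max:
            hits.append((i, site))
    return hits
```

Real design tools layer on many more filters — avoiding repeats, checking specificity against the rest of the transcriptome — but the core idea is this kind of windowed sequence scan.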

Asked if perhaps some people are overly optimistic about the promises of this technology, Doug Conklin, who conducts RNAi experiments in the Gen*NY*Sis Center for Excellence in Cancer Genomics at the University of Albany, told our sister publication RNAi News (yes, there’s even enough happening in this field to fill a weekly newsletter): “Definitely. Definitely. I certainly think some people are. However, I think a really realistic view of it shows that it’s a vast improvement over what we’ve had. It’s not going to work in all instances in all people, it’s not going to work in all genes, it’s not a universally applicable approach to everything, but it’s … a more powerful tool than what’s come down the pike in a while.”

Rockefeller’s Tuschl acknowledges that there are “minor problems like specificity, off-target activities, [and] interference of an siRNA with the natural regulatory mechanisms that may require a little bit of sequence optimization, but these are do-able things that we understand fairly well at this point.”

Leaders of the field see a bright future for the technology as both a research tool and as a therapeutic. Tuschl predicts that in 10 years every institution will have a set of validated siRNAs in the freezer. “Then,” he says, “all the cell biologists have to do is to develop a reporter system that faithfully mirrors the process [they’re] looking at, then [they’re] going to knock out every gene in the human genome and see which gene affects the process that [they’re] studying.”

As for the outlook on RNAi therapeutics, Craig Mello said in a recent public forum that it would be “reasonable to expect potential therapies in the near future.”

— Adrienne J. Burke

Institute of the Year: Cold Spring Harbor Laboratory
Long Island’s Genomics Gem

With James Watson serving as its director for 26 years, it should be no surprise that Cold Spring Harbor Laboratory on Long Island, New York, is closely associated with recent advances in genomics. In fact, according to director Bruce Stillman, the lab’s history of research in genetics stretches back a century to the rediscovery of Mendel’s laws of inheritance in 1900. “There’s always been that strong genetic focus, going back through the phage genetics days, and more recently mammalian genetics,” says Stillman, an Australian-born expert in DNA replication. “I think that influences a lot of people’s thinking around here.”

Cold Spring Harbor has certainly had its share of star genomics scientists in recent years. Aside from Watson, who continues as the lab’s president, the institute is home to RNA interference pioneer Greg Hannon, Arabidopsis expert Rob Martienssen, and cancer genomics researcher Mike Wigler, among others. When listing these and other names as the reasons for nominating Cold Spring Harbor for this award, one GT reader wrote: “Need I say more?”

Stillman justifies the accolades Hannon has received by pointing to his discovery of hairpin short-interfering RNA molecules, which he and other researchers have used to “essentially knock out” genes in animals and humans, Stillman says. Hannon’s work has led to experiments with epialleles — alleles of a gene that specify a gene’s level of protein expression — as well as to an effort to create siRNAs capable of suppressing the function of almost every human gene. “This will be a very powerful screening technology for mammalian genetics,” says Stillman.

Wigler’s primary contribution to cancer genomics is a technology he developed for taking breast cancer tumor biopsies and scanning the entire human genome for copy number alterations, Stillman says. By comparing the number of alterations between tumor samples and healthy tissue samples, Wigler and his group have identified novel oncogenes useful for understanding the mechanism of the disease, as well as potential drug targets.
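The tumor-versus-normal comparison Stillman describes can be illustrated with a toy calculation — a schematic sketch only, not the Wigler lab’s actual method: compute a per-probe log2 ratio of tumor to normal signal and flag large deviations as copy-number gains or losses (probe names and thresholds here are hypothetical).

```python
import math

def copy_number_calls(tumor, normal, amp=0.58, loss=-0.58):
    """Flag probes whose tumor/normal log2 ratio suggests a gain or loss.

    tumor, normal: dicts mapping probe name -> signal intensity.
    amp, loss: log2-ratio thresholds (0.58 ~ a 1.5-fold change).
    """
    calls = []
    for probe, t in tumor.items():
        ratio = math.log2(t / normal[probe])
        if ratio >= amp:
            calls.append((probe, "gain", round(ratio, 2)))
        elif ratio <= loss:
            calls.append((probe, "loss", round(ratio, 2)))
    return calls
```

A region whose probes consistently show gains across many tumor samples becomes a candidate location for an oncogene; consistent losses suggest a tumor suppressor.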

A number of labs at Cold Spring Harbor have also begun to investigate the link between gene silencing and RNAi at the level of heterochromatin, the portion of the chromosome that remains transcriptionally inactive in the intervals between cell divisions. Martienssen’s lab has taken a lead role in the last 12 months in helping to explain this mechanism, Stillman says, along with Shiv Grewal, a former associate professor at the lab.

Besides the caliber of scientists making up its research faculty, Stillman says Cold Spring Harbor stands apart because its mission is so broad in scope. Since its founding in 1890, the lab has sought to provide a haven for both biological research and instruction. About 8,000 scientists visit the lab every year, he says, to attend meetings or to take courses through the lab’s educational branches. In addition, the lab has its own publishing arm, which puts out biology books and journals, including Genome Research and RNA. “There’s a lot of institutions that of course do research, but there’s no institution that I know of that has the breadth of programs that Cold Spring Harbor has,” Stillman says.

Looking to the future, Stillman says the lab will continue to forge new research relationships with clinicians as a means of expanding the applicability of science to medicine. The lab has also made a priority of adding mathematicians and bioinformaticians to its staff to complement its strength in bench work, Stillman says, and further expand the lab’s science. The bottom line, he adds, is that “we try not to limit what the scientists can do.”

— John S. MacNeil

Person of the Year: Francis Collins, NHGRI
Genomics’ Greatest Optimist

Earlier this year, Francis Collins beamingly addressed nearly 1,500 people, calling it a “historic day” as he formally announced the completion of the human genome. Our All-Stars Academy thought it was worth more than a day, and recognized the accomplishment by voting Collins Person of the Year.

But for a lack of clear genomics leaders 10 years ago, Collins might not be gracing this magazine’s cover: NHGRI was not where he pictured himself.

Collins’ background and PhD are in quantum mechanics, but he was looking for more relevance to humans and wound up earning his MD — and getting his first exposure to genetics. Continued research in the field made it obvious to him that “tracking down diseases at the biochemical level seemed like something that we might never be able to do,” he says, “but it seemed like it was worth trying.”

By the late ’80s, Collins headed to the University of Michigan, where he started up a lab and directed one of the first six genome centers. In 1992, Jim Watson left the genome project. “There was great uncertainty about whether [the HGP] was going to succeed, and nobody at the helm,” says Collins, now 53. It wasn’t looking good — and that’s when he got the call asking him to come to be the center’s new director. But Collins turned down the offer.

“I was having a very good time in my own research lab in Michigan, I was a Hughes investigator,” he says. “The idea of becoming a federal employee seemed like the one thing that was never on my life plan.”

Collins regretted the decision, realizing what a once-in-a-lifetime opportunity it had been. So when the phone rang some three months later with the same offer — no one else had elected to lead the effort — he jumped at it. In April 1993, he took on the directorship he still holds today. His role became immediately clear: cheerleading and recruiting, he says. Coordinating the effort was no small task, and he found himself taking “very bright people used to competing with each other and convincing them to work together.”

Bob Waterston, who then headed the Washington University genome center, says Collins was known through the genome project for being “unstintingly optimistic.” He got everything to work by dint of enthusiasm, focus, and being “a good listener,” Waterston says. “I think that makes him quite effective at being able to get … things done … and at getting people to interact with one another.”

Collins worked to overcome one challenge after another for the genome project, not the least of which was its 2005 deadline. “When I took the job I doubted in my heart that we would make the deadlines,” he recalls. “It seemed like, OK, that was a good idea to set a date in terms of marketing this plan, but at some point we’re going to have to admit that we’re not going to make it. This was white knuckles all the way through.”

Not that Collins takes credit for the completion: the heads of the genome centers “whipped themselves mercilessly” during the year leading up to the much-anticipated April announcement, he says. But he’s still crowing about the project coming in two years early and $400 million under budget.

During this time, Collins’ own genetics lab continues to churn out research, publishing about 10 papers per year on genes implicated in various diseases, from early work in cystic fibrosis to the recent finding of a gene linked to progeria. The lab provides Collins with an important “reality check,” he says, making sure he’s always thinking of genomics as it relates to human health.

Forgetting about that, while unlikely, is something of a danger for the field. “While I could get very excited and feel very satisfied about the intellectual aspects of genomics, it’s not enough,” Collins says. “Until we can get to the point where we can say we changed medicine … then I’ll feel like we got somewhere.”

Other challenges facing the field include a shortage of people with the kind of interdisciplinary training critical to the advancement of genomics, as well as concerns about funding as the NIH budget heads into a potentially less bountiful future, Collins says.

In the meantime, Collins keeps urging the genome institute forward. One of his proudest accomplishments is the vision for genome research that was published simultaneously with the April celebration. “That must have gone through at least 42 revisions,” he says.

NHGRI is steaming ahead, too. “We’re already into a whole bunch of other things,” he says, pointing to the HapMap, the Encode project, and the latest initiative in chemical genomics — a push to “put small molecules into the hands of investigators” that was given more oomph thanks to the new NIH roadmap presented last month.

These are all pieces to the puzzle of getting genomics to matter in healthcare. Collins’ dream is “that we would use the tools of genomics to understand the causes of common disease — diabetes, heart disease, mental illness — in a way that would lead to prevention and cures,” he says. “I think I can see that dream come true in my lifetime, but we have a lot of work to do.”

— Meredith W. Salisbury
