Big Science vs. the R01

When US President Barack Obama named Francis Collins as his choice for director of the National Institutes of Health, it was no surprise that people immediately began debating the pros and cons of the candidate. What was surprising, at least to much of the genomics community, was that one of the arguments against Collins centered on arguably his greatest achievement: leading the Human Genome Project.

How did Collins' accomplishment — one that at the time landed him a photo op at the White House alongside then-President Bill Clinton — become a potential Achilles' heel? Quite simply, the fear was that this HGP champion would prove more favorable to "big science" at the cost of small-scale, investigator-driven approaches that have served as the bedrock of the US scientific enterprise.

Indeed, it was enough of a point of contention that Collins brought up the issue in his first address to the full NIH community when his appointment became official last fall. "The mainstay of NIH, both intramural and extramural, will be the creativity of individual investigators," he said. "Those who fear that this guy who used to direct the genome institute may only be interested in supporting big science need look no further for reassurance than the character of the NHGRI intramural program, which I was asked to found in 1993."

Since the success of the Human Genome Project, the megascale approach to science has gained traction in the life sciences — and stirred a growing debate in the research community about how to balance funding for these consortium programs with the R01-type grant that has long been the bread and butter of NIH's award efforts. The concern is about more than just the allocation of money, though. The large-scale projects — HGP has been followed by consortia for ENCODE, HapMap, 1000 Genomes, and the Cancer Genome Atlas, to name just a few — are seen as top-down science. One or more funding institutes decides on a goal, slices it into milestones, and issues very specific RFPs inviting scientists to express interest in joining. That's a far cry from the typical bottom-up grant process, in which scientists themselves draw up proposals based on what's going well in their labs or what's most compelling to them, and then pitch those ideas to funding agencies in the hopes of getting money to pursue them.

The institute most closely identified with big science in the genomics world may well be NHGRI, where newly appointed Director Eric Green inherits a number of ongoing large-scale programs. "Clearly, NHGRI has a history of being very effective and showing leadership for large consortia projects," Green says. Looking at the HGP and some of the programs that followed in its wake, he says, "Those would have been very difficult to accomplish in smaller-scale efforts." The bulk of the institute's funds go toward these kinds of programs, but Green notes that NHGRI's support for R01 grants — including the $1,000 genome sequencing technology grants — has been noteworthy.

Critics of megascale science contend that the top-down approach quells real innovation and keeps researchers from making the kind of serendipitous findings that have historically been huge turning points in science. Proponents of these projects argue that they are a more efficient way to get the most out of limited funds, and that they allow the community to achieve far more than any individual or small group of individuals ever could.

In the grand scheme of things, very few people believe that funding agencies should devote themselves entirely to either of these approaches. Most scientists agree that the real issue is not eliminating one of the options, but rather finding a happy balance that makes the most of consortium-style projects while also stimulating innovation by supporting great research dreamed up by investigators. "There is no doubt in my mind that the bulk of NIH funds should be spent on R01s," says Aravinda Chakravarti, director of the Center for Complex Disease Genomics at Johns Hopkins University School of Medicine and a scientist who has made his mark both with R01-style grants and as a member of various consortium efforts. He says that for the right goals, though, big science is the way to go. "If we can articulate the need well — and the need is to a broad biological community — I think there will be a lot of support [for large-scale science]," he adds.

Pro and con

The real advantage of these large-scale programs lies in the databases and other infrastructure they generate. Done properly, these resources are so enabling that they can change what's possible for the rest of the scientific community. "You need the large projects that really galvanize effort [and] get people working with each other across groups to create these resources that would not be possible in any one lab alone," says Pardis Sabeti, an assistant professor at Harvard University. "You are working together to create a vast resource."

But that rewarding endpoint calls for a very different atmosphere than exists in investigator-initiated research projects. "It's very much an industrial challenge," Sabeti says. "It's more corporate in the way you have to do it because there is a hierarchy, a game plan, timelines. You really are producing."

That need for oversight means conference calls, meetings, and more just to make sure the individual pieces of the program are on track. "There's a lot of time spent in communications and discussion," says Matthew Meyerson, an associate professor at Dana-Farber Cancer Institute and member of The Cancer Genome Atlas. "That's both a plus and a minus because it gives you a lot of checks on your analyses on the plus side — on the minus side, it does take time."

Mike Snyder at Stanford, who's been a participant in the ENCODE project, says that this oversight has boosted the quality of the results in these types of programs. "You're coordinating your efforts with other groups, and that's turned out to be one of the positive aspects of the project because it avoids duplication. It has also led to data quality standards that typically don't get done in R01 research."

Indeed, the management that's inherent in these big programs is part of what makes the ultimate resources and tools so valuable. "If it's not something that small science can do effectively, and you really need management and you really need a centralized structure, this is where big science is good," says Steve Henikoff of the Fred Hutchinson Cancer Research Center, a member of modENCODE. The tradeoff is losing the best qualities of "small science," he says. "Small science has huge advantages. You got creativity, you got innovation, you got flexibility, you're helping people think for themselves and be independent."

Ensuring that all participants get proper credit is another challenge for the consortium approach. With dozens or hundreds of scientists pitching in, even truly great contributions can be lost in the shuffle of getting the project completed. While there are ways for scientists to try to ensure they get credit for their work — such as publishing supplemental papers showcasing, say, a new analysis method — there's no guarantee. Meyerson says scientists should accept that up front. "To do this, I think you just have to say, 'I'm going to go ahead and do this because it interests me, and I think the results are going to be important and I'm not going to worry too much about its impact on my career.' But that's a hard thing for people to do."

A potential disadvantage to consortia is the size of the bet being placed: that is, with so many resources being poured into the project, the fallout from failing to achieve the goals can have major ramifications. "The stakes are bigger when it's a large consortium," Green says.

The cycle

While people debating the topic tend to focus on the tension between funding for large-scale efforts and for R01-type grants, the arguments rarely address the symbiosis that actually exists between these two approaches.

For one thing, the consortium model offers a way for individual labs to plug into ongoing efforts and, while contributing to the large-scale project, hone their own expertise for future work that will be investigator-initiated. David Goldstein, who directs the Center for Human Genome Variation at Duke University's Institute for Genome Sciences & Policy, says that his lab is a great example of this. He won a grant for genetics work linked to a large program about the biology of HIV. "A group like mine, which prior to that didn't have any exposure to HIV-related work, really benefits from having that network of collaborators already committed to the group effort," he says.

Sabeti points to HapMap's members as an example of how participating in a consortium can be a boon to future R01 research. "There's a lot of researchers that learned how to curate, generate, and analyze an enormous data set — and they all went off and got genome-wide association study grants," she says. In that way, being part of the bigger project can help jump-start each scientist's research program going forward.

In the bigger picture, it's also important to remember that what is now big science may one day be something a lab knocks off in an afternoon. Technological advances in particular shape what may be thought of as a spectrum that starts with consortium-sized efforts and ends with R01-scale work — but without the major project effort at the beginning, it's possible the demand for those leaps in technology would never arise. The HGP is a good example: if scientists hadn't built the first reference genome sequence and thereby opened up endless new questions about variation and other genetic phenomena, would there have been such a massive push to develop next-generation sequencing platforms? There's no way to answer that, but what is clear is that those new tools are what make it possible for individual labs to perform genome sequencing projects today.

"All of a sudden, what used to be a consortium-based project in terms of its scientific scope now becomes an individual researcher's project," says NHGRI's Green. "And I think that changing landscape will immediately create an opportunity to have discussions about when a consortium makes sense."

Striking the balance

Essential to finding the right balance between large-scale programs and R01-style efforts is understanding which approach is best in each situation. In the case of something like the cancer atlas project, for example, the decision was easy. "It simply wasn't feasible to think of doing cancer genomics on this time scale, with this breadth of analysis ... without assembling these large teams of researchers and giving them the proper support," says Bradley Ozenberger, TCGA program director at NHGRI.

The way Jason Lieb, an associate professor at the University of North Carolina, Chapel Hill, and a member of modENCODE, sees it, "There are some projects ... that really require a multidisciplinary approach where everybody's on the same page in terms of standard samples, standard data analysis metrics and methods, standard experimental protocols. ... That really is necessary sometimes if you want to achieve certain goals."

Following from that, Lieb says, it makes sense that NIH and other funding agencies will have to place their bets accordingly. "From the NIH's point of view, they have a portfolio of research that they support and some aspects ... are what you might call large-cap stocks — they're big projects that require that sort of infrastructure. And then there's small-cap stocks, or smaller projects. ... They're more nimble, they're more innovative, they might be more focused." Lieb sees the debate about R01s versus megascale projects as something of a red herring. "I wouldn't say that [consortium projects] are competing with R01s because I think that they have different purposes," he adds.

According to Ozenberger, "It's important to emphasize that the R01 process at NIH is crucial to the mission. ... We don't see a reduction in NIH's emphasis on R01. Yes, there appears sometimes to be a tension between R01 funding and large projects, but there's certainly room for both — and a need and a desire to support both processes."

While Chakravarti sees real value in megascale projects, he hopes to avoid having consortium projects just for the sake of having them. "I think keeping individual investigators funded with creative ideas ... is probably our greater challenge and biggest need," he says.

Duke's Goldstein says that having a good balance is important both for accomplishing goals that are too audacious for R01-style grants and also for avoiding the risk of "groupthink" that can crop up in consortium models. "It's certainly essential that we have both going on in the community," he says. Nobody wants "a situation where science is only done by committee," he adds. "It's really critical that we ensure a combination of big, centralized efforts and smaller, investigator-driven efforts so that we have diversity of perspective and approach."

As head of NHGRI and a longtime NIH scientist, Green is familiar with the debate. "The classic argument is that the fewer R01 grants you give out, you have a smaller percentage chance of having a really creative idea bubble up from an individual investigator, which is exactly why you don't want to take R01s down to zero," he says.

As planning for NHGRI's next set of long-term goals gets underway, Green says, he's not worried that the consortium-versus-R01 debate will stifle true scientific progress. "[The] planning process will articulate a set of priorities in genomics, and those will not be articulated by big centers or small centers or consortium grants or individual investigator grants — they will be articulated by compelling scientific opportunities," he says. "We're going to let the science drive the programmatic structure, and not vice versa."

The balancing act is just as much a struggle for individual researchers as it is for funding agencies. Each scientist has to determine his or her own comfort level with working in a large consortium while also staking out time for independent research. Sabeti says she was able to optimize her involvement in the HapMap project by taking the analytical approach she developed for consortium data and applying it to her own research projects. "I contribute to the main [HapMap] paper and also take away with my own project," she says.

Ultimately, Sabeti notes, each scientist has to figure out the right mix. "If you don't have your own research program, then you'll get frustrated over time. But if you don't give all you can to the consortium, the consortium also loses luster." If you are going to get involved in a large-scale project, make sure it enhances your own research rather than detracts from it, she adds.

Chakravarti says it's important for scientists to learn to work well in both models. "If I worked only on the large projects, I would probably not be very interested because they constrain me," he says. "I like my R01s — I can have flights of fancy, I can test something that is bizarre until it's proven right. I can pursue things that I don't have to wait for somebody else to tell me [to do]."

With reporting from Ciara Curtin and Jeanene Swanson
