Name: Henry Rodriguez
Position: Director of Clinical Proteomic Technologies for Cancer at the National Cancer Institute
Background: Leader of the Cell & Tissue Measurements Group, National Institute of Standards and Technology; Fellow, Department of Medical Oncology, City of Hope National Medical Center
The National Cancer Institute's Clinical Proteomic Technologies for Cancer Initiative launched in 2006 as a five-year, $104 million program aimed at building a foundation of technologies and standards to advance the application of proteomics to cancer research. The initiative established five multidisciplinary, multi-institution research centers and developed collaborations with more than 60 public and private institutions around the world.
Last month, NCI announced that it would be providing $75 million to $120 million to fund a second five-year phase of the CPTC. This phase of the initiative seeks to build on the work of the first phase by establishing a collaborative team of six to eight Proteome Characterization Centers that will work to systematically define the functional cancer proteome and discover and verify protein cancer biomarkers.
Henry Rodriguez is the NCI's director of clinical proteomic technologies for cancer. He spoke this week to ProteoMonitor about the CPTC's new funding opportunity and what direction the initiative's research will take going forward.
Below is an edited version of the interview.
What questions was the first phase of the CPTC initiative designed to answer?
It's no secret that the field of proteomics holds great promise, but at the same time it's one that has faced a lot of hurdles and issues. One of the things the NCI did several years ago was start hosting a series of workshops, and the consensus out of about five or six of these workshops was that [researchers] absolutely believe in the science, but they're having tremendous difficulty determining whether the data that's being produced, or the interpretation of that data, is [something] that they can actually put their emphasis upon. They really did not know whether the [biomarker] changes being detected in these experiments could be attributed to the biology or to some sort of bias or variability along the whole pipeline used to generate whatever people were claiming out of biomarker discovery.
The conclusion was that the institute should develop a program that would first look at – A, how do you identify, and B, how do you address – where variability occurs when one develops a proteomics biomarker pipeline. And that's really what the first phase was focusing on.
How does the recently announced funding opportunity fit into that? Is it designed to just continue this work, or does it signify any sort of change in the project's direction?
If you look at the way the pipeline would normally work in proteomics, you've got kind of a two-stage program. On the front end you've got people doing discovery using whatever methodologies they happen to specialize in, and ultimately what you come up with is this huge list of candidates. The question becomes how you winnow down that list to ultimately decide which are the high-priority candidates that you want to move downstream to a qualification study, which is the ultimate goal – to get something into a clinic. That space is pretty much dominated by ELISA-based assays. Those are very well understood assays, and there's no need to reinvent the wheel in that space. But you can see, if you were to draw a funnel, you have all this stuff on the front end, and now you're trying to develop these very expensive, time-consuming ELISAs on the back end. So we realized that in the preclinical space we could help further streamline the process by developing a middle stage that uses analytical tools to further credential the list on a high-throughput platform – one able to take a pool of biospecimens that is larger than in the discovery stage but still statistically pared compared with a qualification study – and determine whether or not those candidates you're finding can be detected in that larger cohort.
The reissuance is, I would say, the next evolution of the program. We still maintain the core foundation we're recognized for – the development of standards, metrics, technology development, optimization – but now thrown on top of that is: let's start going after clinical materials and running them through pipelines.
So the first stage was developing these workflows and platforms, and building confidence in the reproducibility and the standards and so forth, and having established those, you're now ready to move on to actually applying them?
That's exactly it.
How will the biospecimens to be worked on and the biomarkers to be investigated be determined?
The way we're going to do the reissuance is that, at least at a baseline, NCI will be providing biospecimens to the network. In terms of what potential biomarker candidates would come out of it, it's a unique structure. Each applicant is to have a discovery unit and a verification unit. The data generated by each discovery unit will be commonly shared and accessible to each of the other centers. Each center will apply its own approach to interpreting the data from its discovery arm and prioritizing the candidates [that] it thinks would be best to move downstream. At that stage, the list from each of these future sites would be shared commonly in what we're dubbing a biomarker candidate sub-selection subcommittee. This subcommittee will look at the information that came from each of the discovery units, and it will have the flexibility to say, 'Hey, you know what, you came with this list, we think that's fantastic to move into this platform that you've proposed to conduct your verification stage,' but also the flexibility to say, 'You know what, this center came with these candidates and we think these candidates would also be best served if run through your technology.' That's how the process flows. They conduct the verification and then ultimately the information is shared amongst all parties.
So discovery and verification of a given biomarker could be done by two different parties?
That's right. Exactly. Because we don't know what technologies people will be proposing, and I think the scientific community – the people who will be applying – will be the best ones to determine that for us.
So it's wide open in terms of what cancers and biomarkers are being looked at in the discovery phase?
Right now we have a short list of about eight cancer types. One of the things we're going to do is work closely with the programs that are now interrogating the genomes of different cancer types. For example, one great program here is a partnership between two institutes – the NCI and the [National Human Genome Research Institute] – on the Cancer Genome Atlas. On their website there are about six or eight cancer types that are on deck to go through in-depth genomic analysis, and at a minimum those are the ones that are on the plate right now for our program. We're hoping to narrow that list even more within the next couple of weeks and tell the community, 'These are the three or four specific cancer types that are going to be on the table. And what we're looking for from you is to tell us, within your hypothesis-driven concept, how these cancer types would best be interrogated, both from a discovery unit component and then moving things down into a targeted verification unit component.'
How far do you envision verification being taken? How close will these biomarkers be to being ready to use clinically at the end of this process?
The discovery unit and the verification unit are really the preclinical space. But what we want to do is work very closely with programs that are involved in the qualification space – inform them at every stage of what we're doing, have them be part of the program. That way we're hoping that the candidates that come out of the verification stage would be highly credentialed, high-priority, more promising targets that could move into a qualification study and hopefully come out of it as successful candidates.
So you'll be communicating with the Food and Drug Administration throughout the process?
Oh, absolutely. FDA and other programs that are involved in what they refer to as clinical validation, because ultimately those are the programs that would need to take these biomarker candidates and put them into a qualification study.
How about private industry? Is there any connection to industry in terms of taking the biomarkers into qualification studies?
We work closely with the Foundation for NIH. That's a great existing partnership and a way of collaborating between the NIH and the private sector. We do recognize that ultimately, to get these things into the private sector, you have to work with the private community. So, again, our goal is to be transparent not just with the academic community but also with the [private] research community, because the two go hand in hand.
Does the shift from an emphasis on the development of methods and standards to a focus on biomarker discovery and verification imply any change in the sort of researchers who will be likely to participate?
The reissuance is open to everyone. There's no favoritism being shown to the existing centers, and we're agnostic as to the platforms that will be proposed within this coming funding opportunity. But because we recognize that the reissuance will be focused on a deliverable program in terms of data, assays, and reagents, we've defined criteria along the lines of: if you have a technology, at least show within the application how that technology has been shown to be robust and how it has been shown to be reproducible. Those are criteria we've actually placed into the solicitation, and we're hoping that, as the technology has evolved, we'll be able to get new sorts of platforms. The key is to have a technology that is going to produce highly reliable and trustworthy data.
How will data from the project be shared?
In the first phase we recognized that getting data into the public domain is critical. What we didn't want to do was reinvent the wheel, so we looked at what the genomics community did – how they first created their "Bermuda Principles" that set the stage for policies on getting genomics data into the public domain [The Bermuda Principles grew out of a series of meetings held in Bermuda in the mid-1990s that ultimately required the rapid release of prepublication data in the Human Genome Project — Ed.]. Principles like that really didn't exist for proteomics. So we got a lot of input, not just from the international community but also from our own centers, and they recognized that we needed to move into that space. About a year and a half ago we [held] an international workshop in Amsterdam whose specific goal was to start defining principles for how to get proteomics data into the public domain. That's now affectionately referred to as the Amsterdam Principles, and we're hoping this fall to follow up with another international workshop that will build on them. Stage one was just getting the community to come to that consensus [on the need for principles on sharing data]. Stage two, which is what we're starting to work on, is to set down exactly what the metadata should be and what exactly the components should be.