
NCI Proteomic Initiative Director Looks For Ways to Improve Cancer Research


Name: Henry Rodriguez
 
Position: Director, Clinical Proteomic Technologies Initiative at the National Cancer Institute, 2006 to present
 
Background: Group leader, cell and tissue measurements group, National Institute of Standards and Technology, 2004-2006; program analyst, Office of the Director, NIST, 2003-2004; program manager (tissue engineering safety program), DNA technologies group, NIST, 2000-2004; PhD, cell and molecular biology, Boston University, 1992.
 

 
Last year, Henry Rodriguez was named director of the NCI’s Clinical Proteomic Technologies Initiative, a five-year, $104 million effort to evaluate current proteomic technologies and to develop new ones for cancer research.
 
In the fall, the CPTI awarded more than $91 million in funds to researchers across the US, and expects to dole out another $12.5 million later this year.
 
ProteoMonitor spoke with Rodriguez this week about CPTI and what he hopes it will accomplish. The following is an edited version of the conversation.
 
Tell me about the NCI’s interest in proteomics and what it hopes to achieve in this area in terms of cancer research.
 
What I head up is a new initiative that was launched last calendar year, and that’s the Clinical Proteomic Technologies Initiative. … What NCI has done, and it’s actually quite exciting, is that since 2002 they’ve been putting on these workshops where they’ve really challenged the community and asked the question, ‘Why are a lot of the discoveries not truly translating to the clinical environment?’
 
So, it’s a program that wasn’t really created internally at NCI. They brought in the key thought leaders, the engineers, the clinicians, the oncologists, and the researchers. And they really tried to answer where the variability occurs when one conducts proteomic measurements. And at the end of the day … the consensus was that [it was necessary] to build a foundation at NCI, which would basically involve assessing the various technology platforms that proteomics researchers [use] and that the folks within the clinics eventually wish to use.
 
At the same time, the aim was not just to look at technologies, but also [to] develop reagents and resources that would be given back to the public, which would further enhance not just the translation of proteomics but also the basic research conducted by a lot of academics.
 
What happened between 2002, when the workshop took place, and the time CPTI was created?
 
Originally, it started in April 2002. There actually was a workshop held in Bethesda, [Md.] and that was the proteomics planning workshop. That got the ball rolling. And that was the culmination of three [institutes and centers] — NCI, NHGRI [National Human Genome Research Institute] and NIGMS [National Institute of General Medical Sciences].
 
From that period alone, there were several other workshops, and finally at the end of 2005, the most recent [workshop] focused more on the field of protein affinity capture-based technologies and the various reagents that would be required for that. That became the impetus that eventually [led to the] plan that was finally approved by the National Cancer Advisory Board and by NCI’s executive committee. So, last year, what we ended up doing was we had [requests for proposals] that went out for the various programs. Folks applied to them, the review process occurred last summer, and the awards were given out as of last calendar year.
 
[The CPTI] involves three distinct components. The one that a lot of people are becoming familiar with is the center-based program. That one has developed five lead centers, and that involves the Broad Institute, the Memorial Sloan-Kettering Cancer Center, Purdue University, Vanderbilt University School of Medicine, and the University of California, San Francisco [which in the fall were given a total of $35.5 million].
 
The $35.5 million is spread over a five-year window. That one is really to evaluate existing proteomic platforms, just to make sure that you are reliably able to identify, quantify, and compare peptides and proteins in complex biological mixtures. And really what they’re assessing are mass spectrometry-based technologies. But what’s nice about it is that, at the same time, they need to complement that with affinity-based platforms. So now you’ve got a program that has that one component, which is looking at these assessment centers.
 
You need to generate new sorts of algorithms, bioinformatics tools and capabilities. You also need to develop new hardware infrastructure for a lot of the affinity-based platforms. So that’s the second component, which is another RFA … and that one specifically is funding individual, investigator-driven projects. And that has both R01-based mechanisms and technology-based mechanisms.
 
The R01 grantees there are developing what we like to define [as] the next-generation bioinformatics tools. And then the technology ones — we’ve challenged the community to say, ‘Well, if you look at where the field is today, and if the tools that we currently have are not going to be optimal for translating to the clinic, can you develop that next hardware infrastructure?’
 
And those are going to be predominantly based on affinity-based technologies.
 
And the very last component to this, which has $12.5 million spread over five years, is to create reagents and resources that are going to be used not just by the investigators of this program. What’s very nice about this is that those same reagents and resources are actually going to become something that the cancer community gets to have at the end of the day.
 
Those are going to be done through RFPs. And we’re anticipating those RFPs, hopefully by the end of this first quarter, or if not, by the second quarter of this calendar year. Right now, what we’re trying to do is to identify what exactly are going to be the needs of the community and, of course, of our networks. Because we’re relying upon them to say, ‘Well, this is what’s needed for that community.’
 
From the NCI’s perspective, what is lacking technologically? What are you hearing from researchers in terms of things they want from the technology?
 
One of the reasons I like this program is [it] relies on the outside extramural community to give us guidance on what is needed within the field of proteomics. That was one of the reasons why, back in 2002, they started all these workshops. What came out of those workshops is that they identified challenges that definitely do exist when you look at clinical measurements of proteins.
 
We’re looking at clinical proteomics here. Our ultimate goal is to take the fundamental science and then develop the tools, the protocols, and all the regimens, so that you could finally take that application into the clinic. So at the end of the day, when you generate a measurement between a cancerous state and a non-cancerous state, you have the assurance that any sort of variability that could occur upstream of the measurement is either identified or can be accounted for. When you do detect a change, that change is definitely going to be attributable to a biological change.
 
What were the challenges identified back in 2002? Has the NCI identified a few areas that it would like to see addressed?
 
Absolutely. It’s clear that there are pervasive problems with both research designs and the way data is being analyzed. There are problems with reproducibility, and there are also problems with comparability of the research results from laboratory to laboratory, or even [within] a laboratory from run to run.
 
At the same time, another challenge is … a lack of common reagents and highly qualified public data sets that people are able to extract knowledge and information from. Clearly, there’s an ineffective and inefficient transfer of platform technologies to clinical applications. There’s also an inability to manage and interpret these large quantities of data and also to pre-process data. And what they told us is that essentially the private sector is either going to be unable, or they’re going to be unlikely, to address these sorts of challenges.
 
It’s on that basis that this program actually got put together. And as you can see, the various components address those sorts of challenges.
 
I think the reality is that there’s really no single technology platform that’s going to be able to satisfy all the sorts of proteomic measurements that people wish to do. At the same time, you can look at it and you can always say there’s no really mature, true proteomic technology. Because of that, there are really no performance criteria. In other words, if you go through the literature, you find out that the community’s coming back and saying there’s really poor confidence in protein measurement results. There’s a lot of difficulty in [reaching] agreement among different experiments within laboratories and among laboratories, which is absolutely needed for a clinical-based setting.
 
What we’re taking is an unbiased approach, so we’re simply going into the picture and then we’re going to try to identify where all those sources of variability might occur. I think the unique part of our program is that it doesn’t rely upon one site. We’re also working very closely with folks that have expertise in experimental design, so they could come back and say, ‘Before you conduct that experiment, we’re going to identify where bias might occur.’ That way, we can adequately address it.
 
One of the common complaints you hear is the lack of standards. Would you agree with that assessment?
 
The way we like to look at that is, as opposed to jumping in and developing these sorts of standards, we’ve been very careful with our program. … I think what’s absolutely vital … is to make sure you understand what the problem is. The problem might not be a standard that’s needed at the end of the day. It might be more of an understanding of the technology and the platform.
 
So what we’re doing is developing common reagents that are going to be used in assessing these platforms. But really at this point, these materials, we’re defining them more as reference materials that people can use to benchmark these sorts of technologies. At the end of the day, we come back and then we make that judgment call [whether] a standard is truly needed now to move this field forward.
 
Has your office spoken with vendors and tried to address some of these issues? And if it has, what are you hearing from them?
 
We actually are working very closely with the various vendors, and not just the reagent vendors but also the instrument manufacturers. And before our program got launched, we actually did host a conference call with numerous mass spec manufacturers. And as the name of our office, the Office of Technology and Industrial Relations, implies, we believe in working very closely with the private sector. And there are many advantages that the private sector could always have with us. They would be a partner with our programs. They get to work very closely with the various groups within our programs. At the same time, a lot of the knowledge and understanding of these technologies that’s coming out of this can be shared with them instantly.
 
So, the answer is yes, we are working with numerous vendors, numerous manufacturers, and our doors are always open to listen to what they have [to say]. At the same time, vice versa.
