Hanash on How Far Proteomics Has Come and Still Must Go

Who: Samir Hanash
Position: Principal Investigator, Fred Hutchinson Cancer Research Center, 2005 to present
Background: Professor of pediatrics, University of Michigan, 1989 to 2004; director of carcinogenesis, University of Michigan Cancer Center; founding president of the Human Proteome Organization, 2000 to 2003.
 
At last week’s HUPO annual conference in Seoul, South Korea, Samir Hanash spoke on a panel discussing the challenges facing clinical proteomics. [See related story this issue]
 
ProteoMonitor spoke with Hanash this week to go over some of the remarks he made during the session, as well as those made by others on the panel.
 
Below is an edited version of the conversation.
 
At HUPO you described yourself as an optimist about where clinical proteomics is heading. Why?
 
For a number of reasons. In my mind, the current technology we have available is really quite far-reaching in terms of being able to do in-depth analysis and quantitative analysis. Those are things that were lacking in the initial days of proteomics, when one had to do somewhat superficial analysis … to find some very interesting proteins that [told] you something [about] the disease state that someone was working on.
 
But now, I think it is definitely within reach to be able to do in-depth and quantitative types of proteomic studies.
 
Is it progressing at the rate that you thought it would four or five years ago?
 
It’s hard to predict how fast things will move, but the point is that we are reaching substantial depth of analysis at the present time. So we’re able to do comprehensive proteomics in a meaningful way, which perhaps five years ago was a little bit difficult to do.
 
What is the work in your own lab focusing on?
 
Our obsession right now is cancer biomarkers for early detection. This is what we’re doing very intensely. We’re working on epithelial tumors, the common types: colon, lung, and breast.
 
What types of approaches are you taking? What types of technologies or platforms are you using?
 
Basically, we’re taking multiple approaches in terms of experimental design. On the one hand, for early detection, we’re relying on samples that were collected when people did not know that they had cancer. They were asymptomatic at the time, so if we find changes at that early stage, those would be ideal markers for early detection.
 
This is different from working with newly diagnosed patients who at the time of diagnosis obviously have all types of ailments that may contribute to changes in the plasma that may have nothing to do with the type of cancer that they have. The fact that we are able to access blood samples before diagnosis and ask the question at that early stage — ‘What sort of markers can we identify?’ — I think is pretty promising.
 
This is an issue not of technology but of experimental design.
 
That’s one thing. The other aspect is that we are really moving away from discovery being done in an individual lab, in isolation, with a private collection of samples that nobody else has access to or that might have really limited relevance to the disease state in a general fashion.
 
We’re moving in a direction where discovery is being done in multiple labs, working together, each one using somewhat of a different platform, but integrating the data.
 
And this, I think, is a model that’s going to be used more and more in the future.
 
What other labs are you collaborating with?
 
Quite a few. We’re collaborating with the Harvard proteomics group. … We’re collaborating with Bill Hancock at Northeastern University. We’re collaborating with other people in Seattle, and we’re working on informatics aspects of proteomics. Quite a few groups, actually.
 
At HUPO you mentioned the Human Plasma Proteome Project as one of the success stories of proteomics. What other success stories are there?
 
I don’t want to exaggerate the success of the HUPO plasma project, but I think it was a pretty useful project in terms of [it] being one of the first ones that brought together multiple investigators to work on the same project and the same samples.
 
That … perhaps inspired us to move from doing such a collective project on normal samples to organizing groups to work on cancer biomarkers or disease-related applications, again using the same approach of sharing samples and sharing data.
 
Are there other success stories?
 
There are a number of other collaborative projects of that sort. For example, HUPO is now finishing phase one of its studies to look at cerebrospinal fluid [CSF] in partnership with the Huntington’s disease foundation, the High Q Foundation. So it’s looking at potential markers in an organized effort, in this case in partnership with the private sector.
 
We have just finished a colon cancer discovery project, again phase one, where 10 labs got together to apply proteomics platforms to look for colon cancer markers. And that is also very interesting, in terms of the strategy but also in terms of the candidate markers that have been identified, which are going to go into validation.
 
During the panel discussion, you and some of the other panelists suggested there is too much of an emphasis on standards. What did you mean?
 
I think the issue of standardizing proteomics came to the forefront in the wake of studies where people [were] using MALDI-TOF mass spectrometry to find peaks that might potentially be markers for cancer, and those types of findings were very difficult to reproduce in other labs. That created the notion that proteomics is not reproducible, or that proteomics needs some kind of reproducibility assessment.
 
This is too sweeping a generalization about proteomics, based on one type of very limited experience. The other [problem] is that you’re lumping together all applications of proteomics simultaneously.
 
There are people who want to standardize, for example, the [methods] you choose for discovery. And I think that this is an incredibly futile concept, because discovery relies on people’s ability to innovate, to change. Wanting to standardize how you do discovery just does not make any sense.
 
At the same time, you may standardize today, but then the instruments change, the approaches change, and the effort that went into all of that standardization is lost. You’re going to be in a constant spin cycle, trying to standardize something that’s constantly changing.
 
So I think it’s a waste of effort to collectively engage in standardizing discovery technologies and discovery platforms. Instead of rigorous standardizing of discovery, I think it’s very important to have rigorous experimental design, so that if you’re comparing A with B, A and B are matched for everything except the characteristic that you’re looking into.
 
And so having adequate samples becomes very important, but standardizing technology for discovery just does not make any sense.
 
Once you have discovered something, you have to make sure that it’s valid, and then the validation steps have to be standardized so that you’re able to determine, with good precision and good reproducibility, the analyte you’re proposing, let’s say, as a biomarker.
 
To say that all of proteomics needs to be standardized no matter what the intent or application is does not make sense.
 
Is that counter to what HUPO is trying to do with standards?
 
No, what HUPO is trying to do is develop standards that people can embed into their experimental design. It’s not imposing standardization of experimental designs or standardization of the technology.
 
It’s making standards available so that you’re able, within your own lab, to see how reproducible your assays are, and to see how your application, your technology, performs compared to somebody else’s using the same standards.
 
That aspect is quite useful, and certainly it’s optional. Nobody is saying that you have to use those standards. It’s just facilitating the availability of standards for when they are needed.
 
Another thing you said was that there need to be milestones in order to measure success in proteomics.
 
Absolutely, this is very important, because the scientific community and everybody else, when they look from the outside in, have a hard time understanding where proteomics is. We have not done a good job explaining what the various challenges are in proteomics, how as a field we are addressing them, and how we are making progress.
 
The best way to demonstrate progress is to have some kind of milestone [such as], ‘At this point we can analyze X number of proteins or peptides simultaneously, but we think we need to be at this other level in terms of depth’ … and showing the world, on the one hand, that as a field we are making progress.
 
And on the other, showing that with the current technologies that we have, whatever their limitations, this is the type of performance that we can expect.
 
It seems to me that there’s a lot of confusion at the present time as to what proteomics can and cannot do, and when everybody is left to their own devices to figure it out, some people may look at bad examples of proteomics [research] and judge the entire field on that basis alone.
 
Is it a matter of having milestones or having milestones that aren’t too lofty?
 
I’m not aware that milestones were ever established, so some people may have the impression that some things are doable today when they may not be doable. And others have looked at the field and decided that nothing can be done when, in fact, quite a few things can be done. So it’s very wild right now.
 
At this point what type of milestones do you think are appropriate?
 
I think a good set of milestones would be to look at, for example … what can be done by analyzing gene expression at the RNA level and ask[ing] the question ‘Can we match that at the protein level?’
 
So in a tissue, if you can find 5,000 or 10,000 genes expressed, can we find their protein products? [We need to] do some kind of exercise that determines what depth of analysis is currently achievable using proteomics, and set up some milestones [for] how we can go from where we are [now] to where we should be.
 
And inform the world about it.
 
There was quite a lot of discussion at the conference on the issue of biomarkers. What do you think have been some significant bottlenecks in terms of getting them to the FDA for approval?
 
There are quite a lot of bottlenecks, and I divide them into bottlenecks for discovery and bottlenecks for actual validation.
 
In terms of bottlenecks for discovery, access to very good specimens has, in my view, been the major one. People have had to rely on ad hoc types of samples that perhaps had biases built into them, so that in the end the studies were not as good as they could have been. And I think we’re making progress, in the sense that the most informative and valuable samples are becoming more and more available for discovery.
 
Now, in terms of validation to the point [where we can get a biomarker] to the FDA, the major challenge is that there are, at the present time, many, many candidates, and people are at a loss trying to figure out how to validate them.
 
And the issue is not just technological. The issue is that they are coming from diverse sources, and it’s not clear how to combine them for validation purposes. So, if you take the example of lung cancer, somebody might find a marker that’s positive in 20 percent [of cases]. Well, is that worth pursuing?
 
And somebody else might find a marker that duplicates another marker. Is that worth pursuing? On the other hand, there’s this expectation that you want your marker to be relevant to most of the subjects for whom you are proposing it as a biomarker. And that’s unrealistic.
 
So how do you merge the dozens, if not hundreds, of candidate markers, and how do you do that in a continuous process? Let’s say you design a very definitive study to look at 10 markers [and] tomorrow there’s yet another marker that comes out. How are you going to include it in the panel that you just finished testing?
 
There are a lot of iterative processes that we still have not yet figured out when it comes to validation of biomarkers.
 
You were the founding president of HUPO. What do you think HUPO’s task is now and how does that compare to when you were the president?
 
When I started, it was our responsibility to chart a course, and at the time it was not very clear what that course was going to be. There were a lot of options. One option would be for HUPO to become another kind of scholarly society, organizing an annual meeting and having some activities to promote the field, but not engaging directly in bench work or activities that would yield products.
 
We would be this scholarly type of thing.
 
At that time, it seemed that there was a need to bring people together to work on projects. And that was the idea behind the plasma project. Nobody was doing cooperative work in proteomics, and somebody needed to promote that. And HUPO did its share of promoting this type of collaborative activity, whether in terms of bench work, via the Plasma Proteome Project, or in terms of informatics, as with the Proteomics Standards Initiative.
 
Those were the kinds of collaborative activities that HUPO got engaged in. In the past few years, governments have become involved in helping to organize these kinds of lab-based activities in proteomics, and so I think it is perhaps more appropriate right now for HUPO to be the scholarly society, as opposed to one that [spends] millions of dollars to engage labs in the pursuit of proteomics-type research.
 
I think gradually we’ve matured into a scholarly society, organizing annual meetings, workshops, and things of that sort.
