Wishful Thinking? HUPO BPP’s Wish List Includes Cheaper Prices from Vendors


Michael Hamacher
Head of administration, Medizinisches Proteom-Center
Ruhr-Universität Bochum
Who: Michael Hamacher
 
Position: Head of administration, Medizinisches Proteom-Center, Ruhr-Universität Bochum, 2007 to present; scientific coordinator, Center for Applied Proteomics as part of the Innovation Platform Life Sciences within the Dortmund Technology Center, 2006 to present
 
Background: Co-founder, Project Management and Business Development in Life Sciences, 2006 to present; post-doc in Helmut Meyer’s laboratory, responsible for the HUPO Brain Proteome Project and funding, 2003 to present.
 
The Human Proteome Organization founded the Brain Proteome Project in 2003 with the goal of identifying proteins and studying their functions in human and mouse brains in order to shed light on neurodegenerative diseases.
 
Three years later, the group completed pilot studies that examined the research methods and approaches used by different proteomics laboratories. Based on those studies, HUPO BPP has created a wish list of objectives, suggestions, and timelines for the project as it moves forward.
 
The list is available in the May 2 online version of the journal Proteomics. BPP members will be fleshing out details of the list and expect to present more information at HUPO’s annual conference in Amsterdam in August.
 
ProteoMonitor this week spoke with Michael Hamacher, the article’s corresponding author. Below is an edited transcript of the conversation.
 

 
What do you and HUPO mean by ‘wish list’?
 
Maybe I will [give some background to the project]. We started in 2003 and made some pilot studies. … And everybody was very enthusiastic and said, ‘OK, we would like to reveal the proteome of the brain.’
 
Then we had to start a pilot study to define standards, to see what the level [of expertise] was, and to see which applications, which machines, and which gel-based and non-gel-based analysis strategies [were being used] in the consortium.
 
We came together and said, ‘OK, we would like to analyze mouse brain voluntarily.’ And it was sent out to all the groups around the world. Everybody had to make a proteome analysis with their own style. We could do 2D PAGE, we could do one-dimensional MS or two-dimensional MS, whatever.
 
They had to send the data to us so we could [do] central data reprocessing and analysis. It was clear that we would like to have an open database and data collection center. It took one and a half years [for] everybody to have experiments done, and we got an amazing, increasing amount of data.
 
Here at the Medizinisches Proteom-Center, we analyzed [the data], reprocessed it, and compared it [to what other groups in the project did].
 
And it soon became clear that one of the biggest problems is that everybody is doing their own style. The strategies are different. That's not a problem in itself, but even if you are doing the same sample with the same, let's say, MS strategy, you end up with different protein lists, mostly telephone books: all the same proteins in every list, but an astonishing number of proteins that were different between our analysis and their analysis.
 
It became clear that whatever you do, you should do it reproducibly, so that you can show that what you have done will end up with the same protein list [if someone repeated the experiment]. If you change something like sample handling, or the set-up, or the technicians, you will most likely end up with some other part of the proteome picture.
 
This is due to the fact that normally you just see a small window of the whole, because our techniques are not suitable enough or not specific enough, or whatever. Everybody sees a different spotlight, and if you put them all together, maybe you get a broader picture. But as long as you don't see the whole thing, you can't say that the proteins one group found are more right than [what] another group found.
 
Was this wish list then a broad guideline for how the community should be doing its research?
 
Yes, it’s a guideline that we want the groups taking part in our initiative to apply. It is very basic: you should use SOPs, and you should annotate whatever you are doing. You should make your data open and accessible so that everyone can have a look and see whether your data is reliable. That’s very easy, but nevertheless, we see that it’s not usual in the community to have these standards.
 
So after this pilot study, we thought, OK, we would like to start with brain proteomics, but [beyond that] this is to promote our idea that you need these SOPs, and that you could use our standards and data-reprocessing procedures. We have seen that if you are using different search engines, you will also end up with different kinds of lists, because every algorithm has slightly different parameters or stresses something different.
 
If you are using Sequest or Mascot, you are ending up with slightly different lists. And if you put [the data] all together, you will end up with many more proteins.
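The effect Hamacher describes can be illustrated with a minimal sketch. The accession numbers and engine outputs below are invented for illustration; real Sequest and Mascot results would of course be far larger and would require score-based filtering first:

```python
# Hypothetical protein identifications from the same sample, as reported
# by two different search engines (accession IDs are made up).
sequest_hits = {"P02686", "P10636", "P07197", "Q16143"}
mascot_hits = {"P02686", "P10636", "P05067", "P49841"}

# Proteins both engines agree on: a smaller, higher-confidence consensus list.
consensus = sequest_hits & mascot_hits

# Pooling the results yields more proteins than either engine alone reports.
combined = sequest_hits | mascot_hits

print(len(consensus))  # 2 proteins found by both engines
print(len(combined))   # 6 proteins in the merged list
```

This is the trade-off behind the "many more proteins" remark: the union grows the list, while the intersection shrinks it to what the algorithms agree on.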
 
Will this 'wish list' be extended to other HUPO projects?
 
Yes, and I think this is already on the way because we are working closely together with HUPO’s PSI [Proteomics Standards Initiative] developing these standards. HUPO PSI did an extremely good job with helping us with this pilot study.
 
We are also organizing funding action within the European Union. This is called Proteomics Data Collection, and this is something that is very closely related to HUPO’s PSI, and of course, to what we and the HUPO BPP are aiming at. Here we define data repositories and XML files.
 
Much of the list deals with standards, which a lot of people, particularly PSI, are trying to tackle. Was it intentional to communicate and reinforce that message to the community?
 
Yes. You see, when we started, that was not a topic at all. HUPO’s PSI was evolving, but I think we were the first who stressed this point with our pilot study.
 
Everybody was thinking that proteomics would solve every problem, but if you’re ending up with this telephone list, you won’t convince anybody. We think it’s important that everybody is following an SOP.
 
But, of course, we are not only formulating SOPs because with the brain, you have special conditions with cerebrospinal fluid or with tissue and so on, and this is a special case: Where do you get samples, for example. … Another big issue is collecting CSF samples — this is a great problem because CSF varies [from day to day and from patient to patient], and if somebody is doing CSF proteomics, you have to look very carefully at whether this guy has looked into the CSF and characterized it or handled it in the same way.
 
Also, where do you get tissue slides of the brain? Or [looking] at the post-mortem stability of proteins is another big issue, [being] done by Hans Kretzschmar in Munich, [who’s] also in our consortium. He’s also the leader of the European brain bank society.
 
Most of the wish list is pretty general and vague.
 
Yes, at the next workshop or [before] the next workshop, I hope that we will come together again and update this list. Of course, there’s not much detail in [the Proteomics article] because this is just a meeting report, but we are working on it.
 
The costs for high-tech experiments should be low [for example]. We think this is a critical point. If you are looking at DIGE technology, this is very expensive, and the cost for producing these dyes is not very high. We are in contact with GE, for example, and saying, ‘Can’t you lower the prices for academics, so that these techniques are used more widely?’
 
That’s a problem: if people want to repeat their experiments once or twice, the cost … is too high. It would be better if these guys could reproduce their experiments and say, ‘This is profound data.’
 
Some of the items on the list are about the technology. What is the message you’re trying to send to the vendor community?
 
We are thinking if you would like to do real proteomics, you have to end up with reliable data. Reliable data means repetition, and repetition means low costs.
 
If you [want the public to become interested in proteomics] you have to show that proteomics is really doing something for disease, for therapy and diagnosis. To do so, you definitely need good prices and precision in the instruments from the vendors.
 
We try to argue with the vendors and say, ‘For academics, please lower prices for dyes … or show that [you have] very precise instruments.’ If you have bad resolution or specificity, [there is] no need to use these techniques, because you will end up fishing in the mud. 
 
Has there been any communication with the vendors in connection with this list, and if there has, what have they told you?
 
Of course, they are not very happy about lowering prices. But I think we are in a good [position] because we are discussing with Bruker all the time … and we are in close discussions with GE for their DIGE technology.
 
In 2006 when HUPO’s world congress was in Long Beach, we had a special workshop only for the vendors because we were interested in their point of view.
 
Is there any way to prioritize the contents of the list?
 
We are still the HUPO Brain Proteome Project so we are looking for projects dealing with brain proteomics. That is still our aim … and the first [goal] is advancing knowledge and understanding of diseases. Therefore, we are also looking for other projects that are funded, so our consortium members are bringing these projects with them.
 
And of course, we are looking for some funding too. … Without funding, we can’t do a lot. I think for a voluntarily driven initiative, we have gotten very far, but if you would like to do another study, you need essential funding, and that would be the next step.
 
Can you provide an update on the brain project itself?
 
At the moment, I think we are finishing defining the SOPs, defining what the starting point is. And the next point is the master phase … to look at which groups can apply these standards for the master phase and which types of samples can be used.
 
I think this is closely related to funding from the EU or funding from HUPO, because here you need money for the sample handling. Normally, you use a biobank, but then you have to trust it, or you set up a new biobank.
 
What do you mean by the ‘master phase’?
 
The main phase for me is the phase after completing the pilot studies, after implementing the SOPs, guidelines, and data-reprocessing strategy, and after convincing the community. For the main phase we have to identify funding and suitable models or samples for neurodegeneration to analyze with our strategies. Once we have identified these models and the available groups, the main phase will start.
