
Helmut Meyer Talks About Mapping the Human Brain Proteome


At A Glance

Name: Helmut E. Meyer

Age: 53

Position: Professor, Medical Proteom-Center, Ruhr-Universität Bochum, Germany

Coordinator, Human Brain Proteome Project, Germany

Background: PhD, Ruhr-Universität Bochum, 1974-1976. Sequenced isoforms of pig lactate dehydrogenase.

Postdoc, Ruhr-Universität Bochum, 1976-1977. Analyzed phosphoamino acids.

Researcher, Roha Arzneimittel, Bremen, Germany, 1977-1978.

Diabetes Research Institute, Düsseldorf, Germany, 1978-1984. Developed system to study human insulin receptors on red blood cells.

 

How long have you been in Bochum, and how is your group equipped?

In 1985, Professor Heilmeyer at Bochum, with whom I had studied as a postdoc, offered me a position to lead a laboratory for protein analysis. At that time the lab had been running for just a year. We had one Edman sequencer and nothing else. Since then I have been building up the laboratory to its current state. In 2001 we renamed it Medical Proteom-Center, and last March, we moved into a new building. There are 25 people in the group, including the students, and it is still growing.

We do 2D gel electrophoresis using the IPG gel system from Pharmacia, but we also have installed a large-gel ampholyte system developed by Joachim Klose in Berlin. We do amino acid analysis with an AccQ-Tag system from Waters, and we have an Edman capillary sequencer from ABI.

On the mass spec side, we have one Bruker MALDI UltraFlex instrument coupled with several HPLCs. We run LC-MALDI with 75 micron capillary columns and deposit the effluent directly onto the MALDI target. We also have two LC-MS/MS systems with an Ultimate system from Dionex. For the back end, we have a Finnigan LCQ, one of the classics, an older instrument that is four years old, and a newer LCQ-XP that is one year old. We will get a third one early this year. From Bruker we have an FT-ICR instrument, an Apex 3, which we have had for almost two years. At the beginning of the year we will also get a Q-TOF instrument with a nano-HPLC. Finally, we have a big computer with 128 CPUs to handle the mass calculations for the flood of data.

What is the focus of your research?

Our main project is the Human Brain Proteome Project [a collaboration funded by the German Ministry for Education and Research], which I coordinate. We study not only the human brain but also the mouse brain, using a couple of mouse models for different neurodegenerative diseases. What we are creating at the moment is something like a protein atlas describing the content of the human and mouse brain, using different formats. We use 2D gel electrophoresis, but also 2D HPLC: either cation exchange separation followed by reverse phase, anion exchange separation followed by reverse phase, or reverse phase alone. This gives us about 200,000 MS/MS data sets from one sample. We are also planning to divide the brain into about 20 different compartments to analyze subproteomes. The fourth strategy is to isolate distinct structures from neural tissue, like dendrites, pre- and post-synaptic vesicles, et cetera.

We will have a huge amount of data available, which will all be stored in our database. Doing all the LC-MS/MS analysis takes only 20 percent of the time; 80 percent is spent afterwards, when somebody has to sit there, judge the data, and validate what is correct and what is incorrect. That's manual work, and a lot of people have to sit in front of their screens to look at the data.

Our plan is to make the database available to a HUPO initiative. I think we will start making the data available to the public by mid-2003.

What do the other groups contribute to the project?

Protagen [a company that I co-founded] is doing all the MALDI mass fingerprinting. It also provides the infrastructure for the protein database. Microarray data is created by Scienion, a company in Berlin; MicroDiscovery, also in Berlin, analyzes the microarray data. We also have a group in Kassel that studies protein-protein interactions using Biacore technology. They developed a new affinity tag based on a cAMP binding domain. Moreover, three groups at the Max Planck Institute for Molecular Genetics in Berlin are mainly preparing recombinant proteins, all from a human brain expression library. Joachim Klose also belongs to our consortium; he has been improving his type of 2D gel electrophoresis and analyzing all the 2D gel images from the mouse models. There is a standing collaboration with a group at the Fraunhofer Society in Stuttgart whose members are specialists in membrane technology. We are thinking of making the first dimension [of 2D gels] with pre-made gels, which involves wrapping [them] with special membranes.

How can this German initiative tie in with future HUPO initiatives in neuroproteomics?

We received the money for the Human Brain Proteome Project mainly to set up and improve the technologies. We already have some nice results, and I have an oral commitment from the [German] ministry [for Education and Research] for renewed funding when our current grant runs out in mid-2004. Having this commitment, Joachim Klose and I told Sam Hanash that we would be able to coordinate a HUPO Human Brain Proteome initiative.

This project would look similar to the HUPO liver and serum protein initiatives. There will be coordination on which tissue samples should be analyzed, which part of the brain should be analyzed, what standards should be used for the quality and preparation of the tissues, and which kinds of measurements should be made. How will the data be structured, and how can we make it available to the scientific community?

Right now, I am just starting to contact a lot of people. Presumably we will officially start the initiative by the end of May. We have an international congress on protein expression and protein function, organized by the German Society for Proteome Research, in Berlin May 26-28, and we would like to launch the initiative at that venue.

When did you found Protagen?

I founded it with two PhD students five years ago. We were in Bochum until the end of February and moved to Dortmund in early March last year. We now have about 1,000 square meters [10,000 square feet] of lab and office space and 17 employees.

Our main customer and collaborator is Bruker. We co-developed a spot picker and an automated workstation to digest gel samples and to prepare MALDI samples. The hardware was developed by Bruker, and the software, the procedures, and the wet chemistry were developed by Protagen. Bruker is selling this worldwide; they pay for our development effort, and we get license fees for each instrument sold. We have another project with Bruker: bioinformatics software. It takes raw data from mass spec instruments and transforms it into a unified data structure. That structure can then be used to trigger the search engines, and the results are stored in distinct fields, available for further data mining and data management. We started this project three years ago. The first product is ProteinScape, which has been on the market since the middle of last year. We have programmed the software in such a way that it can take primary data from all kinds of mass spec instruments and from the 2D image analysis software packages sold on the market. We have also included interfaces for using all kinds of search algorithms for peptide mass fingerprinting and for MS/MS analysis.
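As a rough illustration of the kind of pipeline described here, the sketch below shows how spectra from different instruments might be normalized into one shared structure and handed to interchangeable search engines. It is not ProteinScape's actual interface; all class and function names are hypothetical.

```python
# Hypothetical sketch only: a unified mass-spec data structure feeding
# pluggable search engines, with results kept in uniform fields for later mining.
from dataclasses import dataclass
from typing import List, Protocol, Tuple


@dataclass
class Spectrum:
    """Instrument-independent representation of one MS/MS spectrum."""
    precursor_mz: float
    charge: int
    peaks: List[Tuple[float, float]]  # (m/z, intensity) pairs
    source_instrument: str = "unknown"


@dataclass
class SearchResult:
    """One candidate peptide identification, stored in distinct fields."""
    spectrum: Spectrum
    peptide: str
    score: float
    engine: str


class SearchEngine(Protocol):
    """Any identification engine (fingerprinting or MS/MS) can plug in here."""
    name: str

    def search(self, spectrum: Spectrum) -> List[SearchResult]: ...


def run_pipeline(spectra: List[Spectrum],
                 engines: List[SearchEngine]) -> List[SearchResult]:
    """Send every normalized spectrum to every engine and collect the results."""
    results: List[SearchResult] = []
    for spectrum in spectra:
        for engine in engines:
            results.extend(engine.search(spectrum))
    return results
```

The point of such a design is that instrument-specific parsers and individual search algorithms stay behind narrow interfaces, so new instruments or engines can be added without touching the downstream storage and mining code.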

As a company, we also belong to the Human Brain Proteome Project, and we receive about €1.8 million as a grant from this project. We have to invest the same amount of money ourselves, and we put the income we get from the Bruker deal back into this program.

What is your position in the company, and how do you split up your time?

I am the vice president and CSO of the company, after stepping down as CEO last year. Most of my time I spend at the university. Once a week I am in the company for a couple of hours, but that’s normally all.

Where do you see a need for technical improvements in proteomics?

The most important point is to do proteomics quantitatively and to repeat experiments. I think the main concern about today's research is that many people really believe that, having so much data from one single experiment, there is no need to repeat it anymore. That's not specific to proteomics research; the same is true for mRNA profiling. That's why you get so many so-called target genes or target proteins. Only repetition and a good strategy for following up will allow you to cut the number of potential targets down to a manageable number. For instance, in May we published a paper in Molecular and Cellular Proteomics, together with Joachim Klose, about a Huntington mouse model, where we found that only three proteins decrease [in abundance] during Huntington's disease, and not 500. Each experiment was done with eight independent repetitions.

Also, doing the image analysis of 2D gels is very tedious nowadays, and there is a great need to make this quantitative analysis much easier. We have just ordered the Cy-Dye system from Amersham Pharmacia. We hope that it will cut the time we need for 2D gel electrophoresis and image analysis down to 25 percent of what it is now.

What we also need are failure-free methods, like the ICAT technology or other non-radioactive isotopic labeling techniques, for quantifying the differences between two or three states of a proteome. What I have seen so far is that the ICAT technology really tells you only the amount of housekeeping proteins. It does not seem to have a very broad dynamic range.