Hanno Langen on How Proteomics Can Impact Drug Development

At A Glance

Name: Hanno Langen

Age: 44

Position: Head of proteomics initiative, scientific expert, F. Hoffmann-La Roche, Basel, Switzerland

Background: PhD, University of Zurich, 1979-1988. Studied DDT-binding peptides.

Postdoc, Rockefeller University, working with Bruce Merrifield, 1989-1990


How did you get into proteomics?

I studied biochemistry in Zurich, working with Bernd Gutte, who is a specialist in peptide synthesis. At that time, he was working with DDT-binding peptides, which became the subject of my PhD thesis.

Then, as a postdoc, I moved on to Rockefeller University for two years, where I worked in the lab of Tom Kaiser. He had died just before I came, and Bruce Merrifield took over his lab. I worked on the construction of mutants of alkaline phosphatase where peptides were inserted into a surface loop — for example, somatostatin. We studied the interaction between this artificial protein and the corresponding receptor, in this case the somatostatin receptor, by changing the surface of the carrier protein. At Rockefeller, I also learned for the first time about mass spectrometry. I went to a meeting with Brian Chait, who was one of the first people using mass spectrometry for proteins and peptides, and I became quite interested in this area.

Then, in 1991, I got a position in protein analytics at F. Hoffmann-La Roche. We got our first mass spectrometer in 1992, a triple quadrupole instrument that was capable of MS/MS sequencing. We started to do some protein characterization, the normal routine work you do in protein analytics, mainly characterizing recombinant proteins but also purified proteins. At this time, we were still fishing for proteins based on function, and I saw a need for being able to analyze mixtures. This was also the time when blotting to PVDF membranes followed by N-terminal sequencing was performed, but very often the purity was not good enough, so I started to establish 2D gels and additional separation technologies in the company. At this time, it was not yet called proteomics.

When did you embark on your first proteomics project?

I soon learned of the work of Bill Henzel at Genentech and others who were doing in-gel digestion and peptide mass fingerprinting to identify the proteins. We started our first project with our infectious disease department in 1995, when the sequence of Haemophilus influenzae came out. We were looking for new drug targets in mutant strains and had several publications coming out of this. It was a good experience for us, even with MALDI, which had quite a low resolution and low mass accuracy, compared to today’s instruments. But in a small genome, it was still feasible to identify the proteins. Had we started at that time with human samples, we would have failed: I know this in retrospect. But since we had this good experience, we moved on — we were not disappointed by MALDI mass spectrometry, like a lot of people were at that time. I think the project is still ongoing at a spinout company from Hoffmann-La Roche called Basilea Pharmaceutica.
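For context, the peptide mass fingerprinting approach Langen describes matches the peptide masses observed in a MALDI spectrum against in silico tryptic digests of database protein sequences. Below is a minimal Python sketch of the idea, not Roche's pipeline: the residue-mass table is truncated, and the two-protein "database" and the peak list are invented for illustration.

```python
# Illustrative sketch of peptide mass fingerprinting (PMF).
# Monoisotopic residue masses in daltons (subset of the 20 amino acids).
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "L": 113.08406, "N": 114.04293,
    "D": 115.02694, "K": 128.09496, "E": 129.04259, "F": 147.06841,
    "R": 156.10111,
}
WATER = 18.01056  # mass of H2O added for the free peptide termini

def tryptic_peptides(sequence):
    """In silico trypsin digest: cleave after K or R (the proline
    exception is ignored for brevity)."""
    peptides, start = [], 0
    for i, aa in enumerate(sequence):
        if aa in "KR":
            peptides.append(sequence[start:i + 1])
            start = i + 1
    if start < len(sequence):
        peptides.append(sequence[start:])
    return peptides

def peptide_mass(peptide):
    return sum(RESIDUE_MASS[aa] for aa in peptide) + WATER

def pmf_score(observed_masses, sequence, tol=0.5):
    """Count observed peaks matching a theoretical tryptic peptide
    within +/- tol Da; the generous tolerance mirrors the low mass
    accuracy of early MALDI instruments mentioned above."""
    theoretical = [peptide_mass(p) for p in tryptic_peptides(sequence)]
    return sum(any(abs(m - t) <= tol for t in theoretical)
               for m in observed_masses)

# Hypothetical two-protein "genome": rank candidates by matching peaks.
database = {"protA": "GASPKVTLNR", "protB": "DKEFRGAVK"}
peaks = [458.25, 601.35]  # masses observed from one gel spot
best = max(database, key=lambda name: pmf_score(peaks, database[name]))
print(best)  # protA matches both peaks; protB matches none
```

With a small bacterial genome like H. influenzae, even a handful of matched peptide masses narrows the candidates sharply, which is why the approach worked despite the instruments of the day.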

At that time, a lot of people also started with LC-MS/MS technologies. John Yates, for example, was also working with H. influenzae, and there was a small competition [as to] who could identify more proteins from this bacterium using either 2D gels or LC-MS.

Who won the competition?

Up to now, there is no winner. My personal belief is that the technologies are very complementary. When we do in-house comparisons, we see proteins we identify with LC-MS/MS that we don’t identify with 2D gels, and vice versa. Sometimes there is a tendency that you get more proteins with LC-MS/MS technology, but with the 2D gels, you also have information about posttranslational modifications. There are pros and cons for both technologies. We have set up both now, and as far as I understand, John Yates also has both, though he is still more focused on LC-MS/MS, and we are more focused on 2D gels and MS.
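The in-house comparison described here boils down to set operations over the protein lists each platform identifies. A minimal sketch, with invented accession numbers:

```python
# Quantify how complementary the two platforms' identifications are.
gel_ids = {"P10001", "P10002", "P10003", "P10004"}   # 2D gels + MALDI
lcms_ids = {"P10003", "P10004", "P10005", "P10006"}  # LC-MS/MS

print("shared:       ", sorted(gel_ids & lcms_ids))
print("2D-gel only:  ", sorted(gel_ids - lcms_ids))
print("LC-MS/MS only:", sorted(lcms_ids - gel_ids))
```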

How is proteomics organized at Roche?

We have small proteomics groups at each research site that focus on a specific disease area, for example in Palo Alto, California; in Nutley, New Jersey; and in Penzberg in Bavaria, Germany. These sites can also start bigger collaborations with the Roche Center for Medical Genomics in Basel, where I am located. We have several groups here, including a transgenic mouse facility, a microarray group working with Affymetrix gene chips, a genotyping group, a protein analytics group, and the proteomics group, which consists of 30 people. There is a second proteomics research group in the diagnostic division in Penzberg (I am in the pharma division), which I also head in terms of the technology aspects, not the projects.

How is your proteomics group equipped?

We have 10 MALDI-TOF/TOF instruments from Bruker here in Basel, as well as triple quadrupole instruments, ion trap instruments, and a Q-STAR. We also have 2D gel equipment from Amersham and some from BioRad, though we don't have an automated 2D system, and standard chromatography equipment from several vendors: HPLCs, nano-HPLCs, protein purification systems, and centrifuges for subcellular fractionation. In addition, we have a whole line of automation for processing samples from 2D gels for mass spectrometry. This was one of the main reasons why we decided on the Bruker instruments: they allow for easy integration with standard robotic systems since they use a microtiter plate format for their sample plates.
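To make the integration point concrete, a microtiter plate format lets picked gel spots, digestion robots, and the MALDI target share one addressing scheme. A minimal sketch of row-major 96-well mapping, with hypothetical spot identifiers:

```python
# Map picked 2D-gel spots onto 96-well (A1-H12) plate coordinates.
ROWS = "ABCDEFGH"

def well_name(index):
    """0-based sample index -> 96-well coordinate, filled row by row."""
    row, col = divmod(index, 12)
    return f"{ROWS[row]}{col + 1}"

picked_spots = ["spot_0413", "spot_0876", "spot_1022"]
plate_map = {well_name(i): spot for i, spot in enumerate(picked_spots)}
print(plate_map)  # {'A1': 'spot_0413', 'A2': 'spot_0876', 'A3': 'spot_1022'}
```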

How do you decide to acquire new equipment?

First of all, we have to have a need for new equipment, for example a new project or new resources. The other driving force is that we can obtain results with a new instrument that we couldn't obtain before, for example a dramatic improvement in sensitivity. Then we go out, look at the instrument, and compare it to the instruments that we run now.

What roles do increased speed or throughput play?

Those are, of course, also a consideration. If buying one new instrument can increase our throughput by some meaningful factor, that is certainly something we keep an eye on. Another aspect is how easily it can be integrated into the existing workflow. The speed of the instrument alone is not sufficient.

How does proteomics fit into drug discovery?

I think it has a role in the very early discovery of targets, in the search for new pathways where a target might be located, but also in the next steps, target validation and lead validation, and later in toxicology. It also plays a role in clinical studies, when we look for biomarkers, and in finding disease markers in the diagnostic field. It is integrated into the whole process. It is not, as many people expect, only a matter of finding magic targets with proteomics that we couldn't find before with genomics or other technologies.

What kind of impact has proteomics made at Roche?

I think it has definitely delivered, but at a time when it was not even called proteomics. For example, we could identify proteins, drug targets, on 1D gels or 2D gels, which made it possible to clone the genes for these proteins and then develop an assay to test their inhibition. We also have examples where we identified proteins based on function. For example, we had some compounds that had an effect on tumor cells but we did not know their targets, and we were able to identify those target proteins.

Where do you see the greatest need for new proteomics technologies or resources?

I see a clear need to provide content for chip-based assays, whether that means protein chips or miniaturized ELISA technologies. We need more antibodies covering the proteome, because with proteomics as it is today, we cannot work on many samples. Either you work on a few samples and try to go in as deep as possible to cover the huge dynamic range of the proteins, or you just work on the top-level proteins, and then you can start to do lots of studies. We need the development of different kinds of antibodies: monoclonal, polyclonal, and maybe also from different sources.

Where do you see proteomics going? Where do you see technical improvements coming from?

I think there will be an improvement in understanding what proteomics is. There is a mixture between the very big hype, “proteomics is a solution for everything,” and the opposite view, “proteomics is not working for anything.” The truth is somewhere in between. There are certain studies where proteomics is of great value, and there are also some studies where I would doubt the value because the effort would be too high.

I think what will happen to some extent is miniaturization, so that you can use less sample. At the moment, we still have the problem that in a lot of cases we use too much sample, since we have no equivalent of a PCR reaction in proteomics. The improvement I see is not really on the mass spec side: mass spec is already able to go to the single-molecule level of detection. Where I see improvements in the future is in linking sample preparation technologies with mass spectrometry. This is actually also where we have made most of our own developments in the last two years.

Where could proteomics have the greatest impact?

At the moment, the discussion is going in the direction of biomarkers. If we can show that a certain drug works in a certain type of population, or if we have early response markers so we could shorten a clinical trial, that would be the ideal cost-benefit scenario and the shortest path to success. On the other hand, this is a very risky strategy. The risk is high, but the benefits could also be high.