At A Glance
NAME: Barry Karger
POSITION: James L. Waters Professor of Analytical Chemistry, Northeastern University
Director of Barnett Institute of Chemical and Biological Analysis since 1973.
BACKGROUND: Professor of chemistry, Northeastern University, 1963-present.
PhD in chemistry, Cornell, 1963.
BS in chemistry, MIT, 1960.
How did you get interested in proteomics?
My career has been in separations for many years. I had conducted research for more than 20 years in HPLC and was involved from the beginning in developing reversed-phase materials and principles, hydrophobic interaction chromatography, and so forth. We started with small molecules and then moved to peptides and proteins in the 1980s, so we were very active in that area. In the mid-'80s I started getting into capillary electrophoresis and was very active in the Human Genome Project of the 1990s. I developed separation materials that were used to sequence the human genome — capillary electrophoresis, replaceable matrices, things of this sort.
In the 1990s I felt that the most important separation problem was separating the [genome] fragments. As we move into the next decade, the most important separation problem is how we’re going to handle these very complex mixtures of peptides and proteins. In addition, I have done a lot of work in terms of interfacing separations with mass spectrometers. Part of that involves [interfacing] microchips with mass spectrometers.
What have you done in terms of microchips?
We were one of the early groups to demonstrate that you could conduct electrospray off of chips. We developed and published a 96-channel device for interfacing to mass spectrometers. This was different from the Advion device in that it was all planar and moved one tip at a time, but it emphasized, as Jack Henion [CEO of Advion BioSciences] does, the advantage of being able to use each channel once so you don't have to worry about carryover. We have a patent on coupling microchips to electrospray. Advion has licensed that patent for the [NanoMate].
We were also early in developing the idea of high-resolution separations coupled to MALDI-TOF. We were doing a lot of capillary electrophoresis — this was back in '97 or so, when the genome project was very hot. We showed we could get very high-efficiency separations. More recently, we've been working on putting streaks down from HPLC separations onto plates; we have an [ABI] 4700 TOF/TOF instrument here and we're utilizing that. As we move to higher-efficiency, faster separations and the peak widths become narrower, we need to begin looking at deposition systems that can handle these high-efficiency, fast separations.
What is the advantage of coupling HPLC to MALDI?
The big advantage is that with this, you can not only do MALDI-TOF, but also MALDI TOF/TOF and MALDI Q-TOF, so we can do [both] MS and MS/MS. As people are well aware, the sensitivity of signals can be different for ESI and MALDI for a particular peptide. First of all, these two methods complement each other. Secondly, a big advantage of MALDI is that we decouple the separation from the mass spectrometer analysis. So we can do MALDI first, and then come back and decide at what positions we want to do MS/MS. For example if we’re doing differential expression studies with ICAT, we can quantitate first and see at what positions there are peptides that are differentially expressed. Then we can go back to those positions and do MS/MS for peptide identification.
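The "quantitate first, then decide where to do MS/MS" workflow described above can be sketched roughly as follows. This is an illustration, not the group's actual software; the function name, plate positions, and fold-change cutoff are all hypothetical.

```python
# Hypothetical sketch of the decoupled MALDI workflow: survey MS first,
# then pick plate positions whose ICAT heavy/light ratio deviates from
# 1:1 before committing instrument time to MS/MS.

def select_positions_for_msms(positions, fold_change=2.0):
    """positions: list of (plate_position, light_intensity, heavy_intensity)."""
    selected = []
    for pos, light, heavy in positions:
        if light <= 0 or heavy <= 0:
            continue  # skip positions where one channel is missing
        ratio = heavy / light
        # flag peptides differentially expressed beyond the fold-change cutoff
        if ratio >= fold_change or ratio <= 1.0 / fold_change:
            selected.append((pos, ratio))
    return selected

survey = [("A1", 1000.0, 1050.0),   # ~1:1, unchanged
          ("A2", 400.0, 1600.0),    # 4-fold up in the heavy channel
          ("A3", 900.0, 300.0)]     # 3-fold down
print(select_positions_for_msms(survey))
```

Because the sample stays on the plate, the selected positions can be revisited for MS/MS at any time after the survey run.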
We recently used the chromatography as a means of de-noising in the mass spectral domain. What we’re able to do is de-noise not only random noise but also chemical noise. We take a large number of m/z positions and follow each of these as a function of the separation time and then look at that and de-noise in that domain. We can also determine the chemical noise as a function of mass and as a function of chromatographic time. Having this information, we can then use that to de-noise in the m/z domain. So what we’re doing effectively is not only doing separation, but we’re using that separation to help improve the signal-to-noise level in the mass spectra, without any distortion of the mass accuracy. We’re trying to use as much integration of the separation with the mass spectrometer as possible.
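The idea of de-noising in the chromatographic domain can be illustrated with a minimal sketch. This is not the published algorithm, only an assumption-laden toy: each m/z trace is followed over separation time, a running-median baseline per trace stands in for the chemical noise estimate, and subtracting it leaves the sharp chromatographic peaks intact.

```python
# Toy illustration (NOT the group's actual method) of de-noising an LC-MS
# data matrix along the separation-time axis. Chemical noise varies slowly
# in time, while real peptide peaks are narrow, so a per-trace running
# median approximates the noise and can be subtracted out.
import numpy as np

def denoise(matrix, window=5):
    """matrix: intensity array of shape (n_mz, n_time)."""
    n_mz, n_time = matrix.shape
    cleaned = np.empty_like(matrix, dtype=float)
    half = window // 2
    for i in range(n_mz):
        trace = matrix[i]
        # running median along the chromatographic (time) dimension
        baseline = np.array([np.median(trace[max(0, t - half):t + half + 1])
                             for t in range(n_time)])
        cleaned[i] = np.clip(trace - baseline, 0.0, None)
    return cleaned
```

A spectrum read out at any time point from the cleaned matrix then benefits from the separation information without the m/z values themselves being touched, which is consistent with the point about preserving mass accuracy.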
In electrospray, one of the things that people are very interested in is the ability to identify and quantitate lower and lower amounts. We all know that electrospray is a concentration-sensitive detector under most operating conditions. So it's clear that for a given amount of material, the narrower the column, the more concentrated the band will be, and therefore the better the detection level. Today, commercial columns go down to 75 µm internal diameter. People have worked with narrower columns — for example, down to 20 µm packed columns. But these have not been commercialized, and there are several reasons, but a main one is that they are difficult to pack: as you go to narrower tube diameters, the back pressure goes up, so if you want to pack the particles you have to use very high pressures. So we had the idea — and this comes from our days in capillary electrophoresis — that the better approach for these very narrow columns is to polymerize the solution in the capillary and make the packing that way. So we have introduced 20 µm monolith columns that are made by simply putting the polymerization solution in and either heating or shining UV light to get the polymerization to go. We've shown that these are 10 to 15 times more sensitive than 75 µm [columns] for a given amount of material. In particular, using the commercial ion trap detectors, for example the Thermo Finnigan LCQ, we are able to do MS/MS at the 5 to 10 attomole level — that's about a factor of 10 to 20 below the 75 µm [column].
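The sensitivity argument above can be checked with a back-of-the-envelope calculation: for a concentration-sensitive detector and a fixed injected amount, the peak concentration scales with the inverse square of the column's internal diameter, all else being equal.

```python
# Sanity check of the column-diameter scaling argument: for a fixed
# injected amount, peak concentration (and hence electrospray signal)
# goes as 1 / d^2 for internal diameter d, other conditions equal.

def sensitivity_gain(d_wide_um, d_narrow_um):
    """Relative gain in peak concentration on moving to a narrower column."""
    return (d_wide_um / d_narrow_um) ** 2

gain = sensitivity_gain(75, 20)
print(round(gain, 1))  # about 14x, consistent with the 10-15x figure cited
```

The computed ~14-fold gain agrees with the 10 to 15 times improvement quoted for the 20 µm monolith columns over 75 µm columns.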
The next exciting thing is that we've just put in an order to purchase the new hybrid mass spectrometer from Thermo Finnigan. This opens up the possibility of doing protein identification and quantitation at the zeptomole level. Now the challenge is to be able to handle the sample in such a way that you don't lose it.
Are there any biological projects to which you are currently applying this technology?
Yes. Bill [Hancock, also at Barnett] and I are collaborating with pathologists from Massachusetts General Hospital, looking at laser capture microdissection of breast cancer tumor samples — something like 5,000 to 10,000 cells. We're looking at differential expression using 16O and 18O as the isotope tags. The idea is to try to recognize or determine proteins that are differentially expressed in the different stages of breast cancer tumors.
There are many challenges to this. One of them is developing the sample processing approaches — being able to optimize things such as dissolving or getting cells off the plastic caps and being able to do the digestion properly. So we're developing the technology to do quantitative differential expression with low numbers of cells. With laser capture microdissection, you have low numbers, and with these low numbers we want to have high sensitivity to see low levels of proteins. [We want] to try to find markers and to characterize well the different stages of the tumor. With LCM, of course, you can pick out the cells that have been stained in a certain way, so you can have a more homogeneous sample.
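The 16O/18O quantitation mentioned above can be sketched in outline. In proteolytic 18O labeling, the two C-terminal oxygens of each peptide are exchanged, shifting the labeled form up by roughly 4.0085 Da; pairing each light peak with its +4 Da partner gives a heavy/light ratio. The peak list, tolerance, and singly-charged assumption here are illustrative, not the collaboration's actual pipeline.

```python
# Illustrative sketch of 16O/18O differential quantitation: find peptide
# peak pairs separated by the two-18O mass shift and report their
# heavy/light intensity ratios. Assumes singly charged peptides; for
# charge z the observed m/z spacing would be the shift divided by z.

O18_SHIFT = 4.0085  # Da: two 16O -> 18O substitutions at the C-terminus
TOL = 0.02          # Da: matching tolerance (illustrative)

def o16_o18_ratios(peaks):
    """peaks: list of (mz, intensity) for singly charged peptide ions."""
    ratios = []
    for mz, light in peaks:
        for mz2, heavy in peaks:
            # heavy partner sits ~4.0085 Da above the light peak
            if abs(mz2 - (mz + O18_SHIFT)) <= TOL:
                ratios.append((mz, heavy / light))
    return ratios
```

A ratio far from 1.0 for a given peptide would then flag it as a candidate for differential expression between the two tumor-stage samples.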
Are you collaborating with a number of biologists?
That’s just one example. We have collaborations with industry, for example a collaboration with a company called Cytyc (see PM, 6-6-03) that’s looking at analyzing ductal lavage for breast cancer, to do proteomics of the fluid in the duct — for example for families to try to discover markers that would be indicative of prognosis. We have collaborations both with industry and with the medical community.
In terms of technical developments, particularly in proteomics — where are there still opportunities for further improvement?
If you ask any leading proteomics researcher today, they would agree that we’re not there yet. There are still a lot of opportunities. Let’s take another example: the plasma proteome. HUPO is working in this area and some of the pharma companies are too. There are issues such as removal of highly expressed proteins — how are we going to do that without losing the low levels? How do we concentrate the low levels to be able to see them and quantitate them?
When I was in DNA sequencing, it was clear that there was a lot more information in the sample than we were extracting. We focused on trying to increase read length. If you can read 200 more bases, you’re getting that much more information in each sample. So the same thing applies here. When anyone does a whole proteome analysis, we’re not analyzing all the components in the mixture — far from it. The question is, how can we manipulate the sample in such a way that we can get maximum information?
To take another example, we really have not proceeded very far in terms of the glycoproteome. Thirty to forty percent of all proteins are glycosylated. And the extent of glycosylation, at least in some cases, [is] involved in disease. So these are examples of things where we have a long way to go. In many respects, it seems to me we went through this in the genome era — in the early 1990s it wasn't clear how the genome was going to be sequenced. We had to look at a lot of different technologies. Then it turned out it was a separation method, with outstanding instrumentation and automation, that allowed this to be done. So here we have a tremendous amount of activity and improvement in mass spectrometry, but we need the front end in order to help manipulate or simplify the mixtures so we can really use the mass spec to its fullest.