Catherine Fenselau on Labeling, Drug Resistance, and Detecting Bugs


At A Glance

Name: Catherine Fenselau

Position: Professor of biochemistry, University of Maryland, since 1987.

Background: Professor of pharmacology, Johns Hopkins University Medical School, 1967-87.

Research fellow in extraterrestrial geochemistry with Melvin Calvin, NASA laboratory at University of California, Berkeley, 1966-67.

Post-doc in mass spectrometry with Alma Burlingame, University of California, Berkeley, 1965-66.

PhD in chemistry, Stanford University, with Carl Djerassi, 1965.

AB in chemistry, Bryn Mawr College, 1961.


How did you get involved with proteomics?

For a long time I have looked for new ways to exploit mass spectrometry in biomedical research. I was at [Johns] Hopkins medical school for some years. We worked on drug metabolite analysis when it was hard to do, and we worked on quantitation of drug levels when it was hard to do. [We also worked on] interactions of drugs with proteins that inactivate or are targets for drugs, [and] analysis of proteins that are drugs. We were also interested in the interactions of drugs with two proteins that are thought to play a role in drug resistance: metallothionein and glutathione transferase. We thought proteomic strategies would be a good way to look at the changes that occur in eukaryotic proteins when the cells or the patient becomes resistant to the drugs they are receiving. Drug resistance often pops up when a patient receives a drug over a long period of time, so it could be anti-epilepsy, insulin, or anti-AIDS drugs, for example. We’re interested in cancer chemotherapy, and in the process by which cancer patients and cells become resistant to these drugs. We were working in that area before the strategy of looking at many proteins at once was formulated. So it was a no-brainer to apply the new strategy.

So how does drug resistance work on a proteomic level?

We’re looking at the changes in protein levels in human breast cancer cells that are susceptible to drugs, compared with four lines of cells that have been selected for resistance, each to a different drug. So this is comparative proteomics. In comparative proteomics, we want to compare the amounts of large numbers of proteins in two systems, in this case two cell lines. We have already found in our work that many protein abundances change. In the literature there had been maybe half a dozen proteins associated with acquired drug resistance, but the first thing we saw is that there are probably 100 proteins whose abundances are changed up or down in these drug-resistant cells. So that suggested [proteomics] is a good approach to the problem.

The second thing we found is that the protein profiles are different in the cells that are resistant to different drugs. It is important to find out if there are any proteins that are up-regulated in all these resistant cells, but it’s also interesting to see the proteins that are changed in a unique way for each of the drugs. The drugs we selected are all thought to work by different mechanisms, so we expect that the unique changes in proteins that we found might have something to do with the mechanisms of the drug.

So for every cell that becomes resistant to a particular drug, is its protein profile going to change in the same way?

I think there’s probably a large variation. The selection is not monoclonal. So we’re looking at averages. But at least our profiles are reproducible across different cultures and different generations.

So then you looked at different ways to quantitate the changes?

In eukaryotes, acquired drug resistance is usually not thought to result from mutations, that is, changes in the sequence of the proteins, but [instead from] changes in expression levels. So if one accepts that it’s important to be able to compare the protein changes, we’ve been evaluating a number of approaches to quantitation. Some of them are going to be more appropriate to real clinical samples, some are going to [be] more appropriate to cell culture, and some of them have greater accuracy than others; these are the kinds of things we’re evaluating. The first method is densitometry, specifically automated densitometry from 2D gels. It’s an old way but still a good way, probably reliable 90 percent of the time.

We have also looked at two isotope-labeling methods. As a long-time mass spectrometrist, I was ecstatic when we were able to bring what mass spectrometry has historically done best, measuring isotope ratios, to bear on this type of problem. We had introduced a method for incorporating <sup>18</sup>O into the carboxy terminus of tryptic or other serine protease peptides. That’s one of the methods we continue to evaluate, and it’s compatible with the shotgun strategies that John Yates advocates. We also use metabolic labeling, where the cells are grown in the presence of isotope-labeled amino acids. We’ve had the best success with providing arginine and lysine carrying six <sup>13</sup>C atoms. This again means that every tryptic peptide has a label in the carboxy terminus, so we have simple patterns: we know what the mass difference is going to be between the peptides in the two cell lines we are comparing. It’s not an idea we originated in our lab, but we think it’s very compatible with cell culture. It won’t work, of course, with clinical samples, because we can’t grow our patients in isotope-labeled amino acids! And then there are two or three other methods that have been proposed that involve labeling, usually after the proteins are isolated from the cells. We haven’t had a chance to evaluate everything.
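The arithmetic behind the two labeling schemes described above can be sketched briefly. This is a minimal illustration, not the lab's actual pipeline: the monoisotopic masses are standard values, and the peak intensities are invented placeholders.

```python
# Proteolytic 18O labeling exchanges both carboxy-terminal oxygens of a
# peptide, so each labeled peptide is heavier by 2 * (18O - 16O).
O16, O18 = 15.9949, 17.9992           # monoisotopic masses in daltons
shift_18O = 2 * (O18 - O16)           # ~4.009 Da per labeled peptide

# Metabolic labeling with 13C6-arginine/lysine: trypsin cleaves after K/R,
# so every tryptic peptide carries exactly one labeled C-terminal residue,
# giving a fixed shift of 6 * (13C - 12C).
C12, C13 = 12.0000, 13.0034
shift_13C6 = 6 * (C13 - C12)          # ~6.020 Da per labeled peptide

# Relative abundance then comes from the light/heavy peak pair of each
# peptide; these intensities are made up for illustration.
light_intensity, heavy_intensity = 8.4e5, 2.1e5
ratio = light_intensity / heavy_intensity   # 4.0: protein is ~4x more
                                            # abundant in the light cell line
```

Because the mass shift per peptide is fixed and known in advance, the paired peaks are easy to find in the spectrum, which is the "simplicity of patterns" mentioned above.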

What preliminary conclusions have you drawn regarding the labeling methods?

Metabolic labeling is very good for cell culture work. Proteolytic labeling, as we call our <sup>18</sup>O method, would be good for clinical samples. But one of the values of 2D gel electrophoresis is that many people know how to do gels whether or not they know how to do mass spectrometry. So it’s reassuring to me that that works pretty well, with the problems everyone knows about: there might be more than one protein in a spot, and we lose very basic proteins and hydrophobic proteins. Those are limitations of gels, not of comparative densitometry.

What other ongoing proteomics projects do you have?

We’re part of a multi-institutional team funded by DARPA to develop a mass spec-based system for rapid analysis of airborne microorganisms. This might be used for monitoring buildings or the subway or something. We’ve been working on methods for sample preparation and how to read the mass spectrum. We believe the most flexible way to deduce the species of bacteria from the mass spectrum is to use a proteomics and bioinformatics interpretation. If we identify proteins that are predicted by the genome, then we don’t have to worry about reproducibility — we’re not trying to fingerprint, we’re relating our spectrum to the genomes or protein sequences of the pathogens. We don’t even have to see the same protein each time, so long as we see a protein that can be associated with the genome. We actually just published a paper where we advocate this approach for mixture analysis. Protein databases have already read the genomes into proteins and put the expected molecular weights into databases.

There are a couple of approaches we’ve used for microorganism analysis, but the simplest one is this: we have developed a way to do a very rapid trypsin digest in the mass spectrometer sample holder in a couple of minutes. Some of those peptides, not all, can then be sequenced either by tandem mass spectrometry or by curved-field reflectron time-of-flight, and that sequence information is used to search prokaryotic protein databases to identify the protein and thus the microorganism. The trick is to carry out the tryptic digestion rapidly and to produce a limited number of peptides so the spectrum isn’t too busy.
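The database side of this workflow can be sketched as an in-silico tryptic digest of genome-predicted protein sequences, against which an observed peptide is matched. This is only a toy illustration of the idea: the sequences and organism names below are invented placeholders, and a real search would also use fragment masses and scoring rather than exact string matching.

```python
import re

def tryptic_peptides(protein):
    """Cleave after K or R, except when followed by P (standard trypsin rule)."""
    return [p for p in re.split(r"(?<=[KR])(?!P)", protein) if p]

# Toy "prokaryotic protein database": organism -> a genome-predicted protein.
# Real databases hold thousands of predicted proteins per genome.
database = {
    "organism_A": "MKTAYIAKQRQISFVK",
    "organism_B": "MGLSDGEWQLVLNVWGK",
}

def identify(observed_peptide):
    """Return organisms whose predicted proteins yield the observed peptide."""
    return sorted(
        org for org, seq in database.items()
        if observed_peptide in tryptic_peptides(seq)
    )

print(identify("QISFVK"))   # prints ['organism_A']
```

This mirrors the point made above: any peptide that can be tied back to a genome-predicted protein identifies the organism, so the same protein does not need to be observed in every run.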

So the vision is that the machine would sample air and identify the microorganism right there?

Yes. But one of the challenges is making the identification against a background. In my neighborhood they spray Bacillus thuringiensis to kill web worms, but that’s a first cousin of Bacillus anthracis [anthrax], so that’s a background for air analysis of microorganisms. That’s why we’ve been working hard on strategies for analyzing samples of microorganisms in mixtures.

How long before we have a machine that can identify anthrax from the air using proteomics?

There are some mass spec-based systems being used currently to monitor air. They don’t use the proteomics strategy that I’ve introduced, but perhaps they will. So it has to pass the engineering test: Can it be made rugged? Can it run automatically without a scientist there? Will it work in the desert at 120 degrees? Those sorts of things. I think it could be implemented in two years. The hard parts are done: the air collection, and a mass spec that will run by itself for a week.

With whom are you collaborating on the microorganism detection project?

With [Johns] Hopkins Applied Physics Laboratory, with Hopkins Medical School, with scientists from the TNO laboratory in Delft [the Netherlands], and with computer scientists from other universities and national labs. This effort began in 1995, so it predates our current national concerns.

What major problems do you think exist in proteomics research right now?

I think we oversold it in the beginning, so there’s a lot of investor burnout. A lot of the startups have closed or downsized substantially. I think we need to back up and put it on a more intellectual basis.