At A Glance
Name: Bradford Gibson
Position: Professor and director of chemistry at the Buck Institute for Age Research, since 2000.
Adjunct professor of pharmaceutical chemistry at the University of California San Francisco, since 1985.
Background: Post-doc in peptide and protein chemistry and mass spectrometry at Cambridge University, 1983-5.
PhD in chemistry, Massachusetts Institute of Technology, 1983.
Worked at Stanford Research Institute, 1976-79.
BS in molecular biology and biochemistry, UC Santa Cruz and Santa Barbara, 1975.
Tell me about the Buck Institute.
The Buck Institute is the only independent research institute in the US devoted to aging research. Aging had often been seen as a kind of crackpot field until recently. But that’s changed with genomics methods and model systems, with the field shifting from anecdotal, not really great research to really trying to get [a] good genomic, proteomic, and molecular basis for aging. We are not a big group — about 12 PIs, and our total size is 120 to 130 people — but we cover a lot of different disciplines and we try to work collaboratively on issues of the biology and chemistry of aging and age-related diseases. We have a genomics core, a mass spec and proteomics group — which I run — and also others: morphology, animal systems, etc.
What projects are you working on now?
One is fairly recent, with another colleague, Julie Andersen [Associate Professor at the Buck Institute]. We got funding from the NIH to look at the molecular basis of oxidative damage to complex 1 in the mitochondrial electron transport chain as a model for Parkinson’s disease. It’s been known for a long time that complex 1 dysfunction is involved very early on in Parkinson’s. There are 45 proteins involved in this complex, and we’re trying to understand what the damage is that occurs — we know that there is oxidative damage, and we’re trying to use basic proteomics strategies to look at site-specific damage. We’re using a transgenic mouse that controls levels of glutathione, the antioxidant, so that you can lower levels of glutathione and then see an oxidative defect in the function of complex 1. We’re now pulling out complex 1 and starting to map out all the proteins. In particular, we’re interested in looking at some of the most sensitive oxidation sites, like cysteine oxidation, and we’re developing stable isotope approaches — a method where we read off the redox state of individual cysteine residues in these proteins, for example — to understand if some of them are starting to be oxidized during glutathione depletion regimens. We gave a small report at ASMS [on some of the methods] and we’re trying to write up something now about the methodology.
Tell me about your methodology.
It’s not unlike some of the ICAT strategies that Ruedi Aebersold’s group developed — though those target cysteine residues for a totally different reason, to simplify large proteomic mixtures. We have come up with methods that can label cysteine residues — ones that, say, are in a reduced state — and then do a reduction and re-alkylate with another stable isotope, now that we’ve changed the oxidation state of the cysteine back to the reduced form. The idea is to label the cysteines so that when you do the proteomic analysis you have a stable isotope pair that reflects the redox state of that particular residue. For example, you can imagine a cysteine that’s supposed to be totally in the reduced state; if you did this chemistry, everything would show up okay in the non-disease[d state]. But let’s say 50 percent gets oxidized in the diseased state. Then you do our chemistry and you start to see [that] only a portion of the cysteine population is available for the initial hit with the alkylating reagent, and then, after you’ve done a more thorough denaturation and reduction, the other part becomes available for reaction. So it’s basic protein chemistry, but it’s applying new reagents and using the old standbys of stable isotopes to help in identifying the states of these amino acids.
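[The two-pass readout described above can be sketched in a few lines of code. This is a minimal illustration only — the function name and peak intensities are hypothetical, assuming light-reagent alkylation of initially reduced cysteines, heavy-reagent alkylation after full re-reduction, and equal detection efficiency for both isotopic forms:]

```python
def fraction_oxidized(light_intensity, heavy_intensity):
    """Estimate the oxidized fraction of a cysteine from its stable-isotope pair.

    light_intensity: signal from cysteines alkylated in the first pass
                     (those that were already reduced)
    heavy_intensity: signal from cysteines alkylated only after full
                     re-reduction (those originally oxidized)
    """
    total = light_intensity + heavy_intensity
    if total == 0:
        raise ValueError("no signal observed for this peptide")
    return heavy_intensity / total

# Hypothetical control sample: nearly all of the cysteine is picked up
# in the first (light) labeling pass.
control = fraction_oxidized(light_intensity=98.0, heavy_intensity=2.0)

# Hypothetical glutathione-depleted sample: half the population is only
# labeled after re-reduction -- the "50 percent oxidized" case above.
depleted = fraction_oxidized(light_intensity=50.0, heavy_intensity=50.0)

print(f"control: {control:.2f} oxidized")   # 0.02
print(f"depleted: {depleted:.2f} oxidized") # 0.50
```

[In this scheme a 1:1 light/heavy pair flags a residue that is half oxidized, mirroring the 50 percent example in the interview.]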
We have a paper that came out in [the Sept. issue of] Nature Biotechnology on a chemoenzymatic method for determining phosphorylation sites in proteins that we developed with Kevan Shokat at UCSF. Kevan’s original idea was to change phosphoserine and phosphothreonine sites using standard kinds of beta-elimination chemistry, followed by a Michael-like addition, to change these phosphorylated residues into amino acids that look like lysine except that they have a sulfur instead of a methylene at the beta carbon position.
The idea here is that these modified residues will now be cleaved by standard proteases, which will recognize former sites of phosphorylation. There is no way to cleave specifically at phosphorylated sites — now we have a way. We’d like to develop these methods to go after the phosphoproteome.
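[The mass arithmetic behind this chemistry can be checked with standard monoisotopic atomic masses. The sketch below assumes cysteamine (aminoethanethiol) as the Michael donor — a common choice for this beta-elimination/addition conversion, though the interview doesn’t name the reagent — and verifies that the product residue differs from lysine by exactly a sulfur in place of a methylene:]

```python
# Standard monoisotopic atomic masses in daltons
H, C, N, O, S, P = 1.007825, 12.0, 14.003074, 15.994915, 31.972071, 30.973762

def mass(h=0, c=0, n=0, o=0, s=0, p=0):
    """Monoisotopic mass for a formula given as atom counts."""
    return h*H + c*C + n*N + o*O + s*S + p*P

phosphoserine = mass(h=6, c=3, n=1, o=5, p=1)     # pSer residue, ~166.998 Da
h3po4 = mass(h=3, o=4, p=1)                       # lost in beta-elimination
dehydroalanine = phosphoserine - h3po4            # Michael acceptor, ~69.021 Da
cysteamine = mass(h=7, c=2, n=1, s=1)             # assumed Michael donor, ~77.030 Da
aminoethylcysteine = dehydroalanine + cysteamine  # lysine-like product, ~146.051 Da
lysine = mass(h=12, c=6, n=2, o=1)                # Lys residue, ~128.095 Da

# The product is heavier than lysine by S minus CH2 -- the "sulfur
# instead of a methylene" described above.
delta = aminoethylcysteine - lysine
print(round(delta, 6) == round(S - (C + 2*H), 6))  # True
```

[The idea, per the interview, is that a lysine-directed protease can then cleave at these converted residues, marking the former phosphorylation sites.]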
The idea is that you can take good protein chemistry strategies and find out ways to use these chemistries to expand the chemical toolbox for proteomics. There was a lot of technology developed in the ‘70s and ‘80s for protein chemistry and a lot of these techniques are waiting to be revitalized and applied to proteomics strategies.
You were looking for biomarkers for Alzheimer’s ...
This was a project we were originally looking at with a company called MitoKor in San Diego, looking at mitochondrial dysfunction as an underlying cause of many diseases, including aging itself (see PM 1-21-02). I was still at UCSF at the time, [and] we were using a cell line called a cybrid cell, generated from human neuroblastoma cells that had had their mitochondria selectively killed by treatment with ethidium bromide. You were then able to add in mitochondria from human platelet cells, so you create this hybrid — or ‘cybrid’ — cell where all the mitochondria were from human platelets, with a constant nuclear background. If the mitochondria came from an Alzheimer’s patient, the cell had a slightly different phenotype than a neuroblastoma cybrid cell from a control group. The idea was to use this cell line to help identify biomarkers of the disease.
That technology turned out to be a lot trickier than anticipated — the cybrids weren’t as stable as we would have liked them to be, although MitoKor is still pursuing this. This was one of our first major forays into 2D gel technology, and we were humbled. We could only identify about 100 proteins using 2D gels and we knew that there were probably between 1000 and 2000 proteins in the mitochondria.
The good part is that it led to another project, also with MitoKor, and two more recent papers. We decided to go after and identify all the proteins in the human mitochondria. We were initially so frustrated by the lack of coverage that we got using 2D gel technology that we ended up going to a sucrose gradient fractionation scheme with 1D gels that was originally put forth by Rod Capaldi at the University of Oregon. We got nine to 10 different fractions that maintained protein complexes and then brute-force analyzed the 1D gels very aggressively — we did that with hundreds of thousands of tandem mass spectra and identified about 650 proteins, which was about two to three times more than anyone else at that time had been able to do. We think we’re only about halfway, and we’re still looking at ways to get the other 50 percent of the proteome.
We liked the sucrose gradient fractionation method because it fractionates while maintaining protein complexes, and it separates as best you can some of the different electron transport chain complexes — [Capaldi] was able to separate, to a large degree, complex 1 from 2 from 3. We got intrigued with that ability, and also with his being able to get these large, tough membrane complexes out and into gels. It’s not unlike what a lot of groups are doing with multi-dimensional chromatography.
We’ve also tried to do some of the MudPIT analysis that John Yates has championed. I think that may not work for overly complex mixtures. I think it works really well if you have 50 to 100 proteins, but if you’ve got hundreds of proteins then you just keep revisiting the most abundant proteins, and it doesn’t have a lot of depth. The sucrose gradient allowed us to separate things at the protein level much better.
I think in modern proteomics, the shift is going to be toward separating things at the protein level and thinking about protein complexes, protein interactions, all that kind of stuff. So if you go fishing in the proteome and you want depth, you’re thinking about pulling out protein complexes and protein machines. I can see the shift among some of the people I work with and among colleagues: you’ve got to go after the proteins.
Where is proteomics going in the future? What techniques need improvement?
I remember back in 1982, when I was a grad student at MIT, there was an article in Science or Nature on the death of protein chemistry. It said that with all the new molecular biology techniques, protein chemistry was relegated to the sidelines. One of the things that happened as a result was that for 10 years a lot of people didn’t learn protein chemistry. But now we’re at a point where protein chemistry is really needed again.
I hope proteomics won’t be the death of protein chemistry — hopefully, as opposed to molecular biology, it will revitalize the need for the good, solid protein chemistry that is lacking. And it’s lacking because we lost a whole generation of scientists — people who were trained in classical protein chemistry. We have great people in mass spec and great people in bioinformatics, but when you talk about people who know weird side reactions, or proteases that will cleave at unusual residues, or tyrosine chemistry — there’s almost nobody. And that’s a problem. So if there’s one area that needs to be pushed, I think it’s the return of chemistry to proteomics.
Hopefully there are grad students now who see protein chemistry as an interesting area.