At A Glance
Name: Ian Blair
Position: Director, Center for Cancer Pharmacology, University of Pennsylvania, since 1997.
Scientific Director, Genomics Institute Proteomics Facility, University of Pennsylvania, since 2002.
Background: Director of Mass Spectrometry Center, Vanderbilt University, 1983-97.
Lecturer in analytical chemistry, Imperial College of Science, Technology, & Medicine, University of London, 1979-82.
Research Fellow in biology, Adelaide University, Australia, 1977-79.
Research Fellow in chemistry, Australian National University, 1975-77.
Post-doc in organic chemistry, Adelaide University, 1972-75.
Lecturer in organic chemistry, Makerere University, Kampala, Uganda, 1971-72.
PhD in organic chemistry with Derek Barton, Imperial College, 1971.
BSc in chemistry, Imperial College, 1968.
How did you first get involved with proteomics?
About four years ago, I decided that we really needed to build up proteomics at Penn, and so I started advocating for a proteomics core facility. Originally, it was within the context of the medical school environment. We applied for a proteomics core within a cardiovascular disease program and got it funded. Basically, it was using instrumentation that I already had in my lab, or using some instruments that were available within the medical school.
About that time, in 2001, the Penn Genomics Institute was founded, which was originally focused on gene microarrays and bioinformatics. The director of that institute, David Roos, asked if I would consider developing the concept of proteomics to serve the whole of the University of Pennsylvania under the auspices of the Genomics Institute. At the same time, the cancer center also felt the need for proteomics. Therefore, the [proteomics facility] went forward under the patronage of both the Genomics Institute and the Cancer Center. The Proteomics Core Facility was originally established within my own research lab. About six months ago we moved into 2,000 square feet in the Biomedical Research Building.
Were you already working in proteomics when you started the core facility?
I was originally interested in how DNA is modified during oxidative stress. From our research, I recognized that some of the structural motifs within DNA were also present within proteins. This is most obvious in arginine amino acid residues within proteins, which have much the same structure that you see in deoxyguanosine in DNA. Thinking about whether the same modifications could occur, first in peptides and then in proteins, and then realizing that we needed to look at this more globally: those were the germs of why I thought proteomics was going to be important. Although some of the original concepts didn’t work out, we’ve identified some interesting modifications to proteins using the proteomics technologies that became available through the development of this core facility. Then of course there was an explosion in people wanting to look at plasma proteomics, and all those sorts of things. More recently [we’ve gotten] into quantitation, which is what’s consuming us at the moment.
What instrumentation do you have at the core facility?
We have two primary ways of doing proteomics studies. We have DIGE technology using the Amersham CyDye system with the DeCyder software, which we use as a kind of screening tool for seeing up- and down-regulation of proteins in different settings in cells, CSF, and in plasma. So that is one platform. We have mass spectrometry-based systems, which range from a quite simple MALDI-TOF to the LCQ Deca. And then on the upper end, we have an Applied Biosystems 4700 TOF/TOF doing MALDI-TOF/TOF, and an Applied Biosystems Q-STAR for doing LC-MS work. Very recently we acquired an LTQ, but not with the Fourier-transform component yet. We hope that some kind benefactor will allow us to get into the FT business. We have an NIH high-end instrument grant under review and so we are keeping our fingers crossed that [it] might get funded [Editor’s note: the center received this funding on June 10]. We see that as the next frontier for us.
You were saying that you are getting more into quantitation — what are you doing in that area?
We’ve turned to using SILAC, which was developed originally in Matthias Mann’s lab. We’ve found it’s a very powerful way to do quantitation, and it addresses the question of, ‘how do you know that the spot you see changing on a 2D gel is really the protein you think it is?’ If you include a 13C-labeled standard, you can then monitor the changes and see if there’s a correlation in that way. I think this can be elaborated into a number of different settings for doing quantitation in CSF, in plasma, and certainly within cells. We’re aggressively pursuing that methodology right now.
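The core idea behind SILAC quantitation can be sketched in a few lines. This is a minimal illustration, not the facility's actual pipeline: the peptide names and peak intensities below are hypothetical, and real workflows work from extracted-ion chromatograms rather than single intensity values.

```python
# Hypothetical sketch of SILAC-style relative quantitation. Cells grown in
# "light" (normal) and "heavy" (e.g., 13C6-arginine) media are mixed, and
# the heavy/light ratio of each peptide's peak intensities reports the
# relative abundance of its parent protein between the two conditions.

def silac_ratio(heavy_intensity, light_intensity):
    """Heavy/light intensity ratio; >1 suggests up-regulation in the
    heavy-labeled condition, <1 suggests down-regulation."""
    if light_intensity == 0:
        raise ValueError("light channel intensity is zero")
    return heavy_intensity / light_intensity

# Illustrative (made-up) peak areas: (heavy, light) per peptide.
peptides = {
    "EXAMPLE_PEPTIDE_A": (1.8e6, 0.9e6),
    "EXAMPLE_PEPTIDE_B": (4.0e5, 1.6e6),
}

for name, (heavy, light) in peptides.items():
    ratio = silac_ratio(heavy, light)
    direction = "up" if ratio > 1 else "down"
    print(f"{name}: H/L = {ratio:.2f} ({direction}-regulated)")
```

Because both labeled and unlabeled forms of a peptide are measured in the same run, the ratio is insensitive to run-to-run variation in recovery or instrument response.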
Why did you choose SILAC in particular for labeling?
I have been working on quantitative analysis for 30 years now. Using stable isotope dilution represents the best way you can do it, and that is because of three factors. One, if you have a stable isotope analog, you do not have to correct for recoveries during these complicated procedures. The second thing is, you can use the ratio between the labeled analog and the unlabeled [one] to do quantitation. By analyzing standards it is possible to calculate the actual amount of your particular compound. What is not really recognized that much is that they also act as carriers: if you get selective losses of compounds during these isolation procedures, the stable isotope analogs will act as carriers through these extensive work-up procedures and prevent such losses. There is no other way of doing it, actually. This represents the state of the art in quantifying small molecules, so applying it to macromolecules is very sensible. In fact, this is the kind of technology we use [to] quantify modifications and changes to DNA.
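The ratio-based arithmetic described above can be made concrete with a short sketch. All numbers here are invented for illustration; real assays build a multi-point calibration curve rather than using a single response factor.

```python
# Illustrative sketch of stable isotope dilution quantitation. A known
# amount of a stable-isotope-labeled internal standard is spiked into the
# sample before work-up. Because analyte and standard behave identically
# through extraction and chromatography, losses cancel, and the measured
# analyte/standard signal ratio times the spiked amount gives the amount
# of analyte, after correcting with a calibration-derived response factor.

def quantify(analyte_signal, standard_signal, spiked_amount,
             response_factor=1.0):
    """Amount of analyte = (analyte/standard signal ratio) x spiked
    amount, divided by the response factor from calibration standards."""
    ratio = analyte_signal / standard_signal
    return ratio * spiked_amount / response_factor

# Calibration (hypothetical): equal amounts of unlabeled and labeled
# standard give a measured ratio equal to the response factor.
response_factor = 1.05

# Sample (hypothetical): 10 pmol of labeled standard spiked in.
amount = quantify(analyte_signal=2.1e5, standard_signal=1.0e5,
                  spiked_amount=10.0, response_factor=response_factor)
print(f"analyte = {amount:.1f} pmol")  # 2.1 ratio * 10 pmol / 1.05
```

The carrier effect he mentions has no analog in the arithmetic; it is a physical benefit of having an excess of chemically identical material present during work-up.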
What in particular are you trying to quantitate now?
We have a number of projects. The theme is oxidative stress. So for example, we are looking at oxidative stress in breast cancer, and seeing how protein expression is modulated in in vitro models of breast cancer. What we’re doing is seeing how protein expression differs when you grow cells in two dimensions, compared with three dimensions, as a model for what’s occurring in vivo. That’s providing specific targets for us to eventually look at in samples from breast cancer patients, to see if we can use those as specific biomarkers. We’re doing similar things in pancreatic cancer, where we’re looking at protein expression in people with pancreatic cancer compared with benign disease of the pancreas. [It’s] the same idea of using stable isotope dilution methodology to quantify proteins. We’re [also] looking at protein expression in leukemia cell lines compared to normal, again with this eventual idea of going to patients.
What requests do you most often get at the facility?
I think mainly, people tend to focus on particular proteins they’re interested in — seeing whether they’re there or whether they’re regulated. We haven’t up until now been inundated with requests to do, say, the proteomics of a whole brain sample. They tend to be much more focused questions — they want to see how proteins that they’re interested in are changing. Another thing we get involved with a lot is identifying the proteins within a particular protein complex. A lot of our work is straightforward protein biochemistry where people want to identify proteins. And then once they know what they are, they want to quantify them, so they’ll ask to do the same thing using 2D gels. As the stable isotope methodology emerges, I suspect that will be something that gets lots of attention.
Do you have any collaborations with companies?
We have a collaboration with [Eli] Lilly, and we have one with Thermo [Electron]. With Lilly, the general area is in neurodegenerative diseases, which is best described as a biomarker proteomics approach. [The] Thermo [collaboration] is more from an instrumental aspect.
How are you attacking the post-translational modification problem?
We are using all the power of the instrumentation in trying to solve it. I think our role at Penn has been to try to focus people on the specific question they want to answer. Otherwise it becomes too global. The other thing we have been doing is, when people have done a gene microarray analysis, [helping them] see whether those proteins have also been up-regulated. So [we use] the microarray information to focus on particular proteins.
I see three ways that we go forward. We use the microarray methodology to see what proteins to focus on, and 2D gels to get another sense of proteins that we may be interested in. Then [we use a] sort of shotgun approach to see if we can pick up additional proteins that we may be interested in. I think with the shotgun approach, you are never quite sure if you’re looking at a pre-protein, the protein, or a protease fragment. So we’re also trying to go in a 3D LC direction, using molecular weight analysis on the front end before doing a protease digest. That is not without its hazards, because you have got to disrupt protein-protein binding or else you can get very misled as to what the molecular weights are. Actually, we found that using molecular weight methodologies is very good for pulling out protein complexes. In other words, if something elutes from a gel filtration column at a higher molecular weight than expected, you can then examine what proteins are bound in that complex using immunoprecipitation procedures. This approach has turned out to be quite successful for some of our research projects in functional proteomics.