Vanderbilt's Liebler on Developing Standards for Shotgun Proteomics


Daniel Liebler
professor, departments of biochemistry, pharmacology and biomedical informatics
Vanderbilt University School of Medicine
Who: Daniel Liebler
 
Position: professor, departments of biochemistry, pharmacology and biomedical informatics, Vanderbilt University School of Medicine, 2003 to present; director of proteomics, Mass Spectrometry Research Center, Vanderbilt University School of Medicine, 2003 to present; director, Jim Ayers Institute for Precancer Detection and Diagnosis, Vanderbilt-Ingram Cancer Center, 2006 to present.
 
Background: director, Southwest Environmental Health Sciences Center, University of Arizona, 1999-2003; PhD in pharmacology, Vanderbilt University, 1984; postdoc work in biochemistry/biophysics at the University of Oregon, 1984 to 1987.
 

 
Last fall, the National Cancer Institute handed out $35.5 million to five teams to evaluate and develop proteomics technologies for cancer research. One of those teams receiving a portion of the grant is headed by Daniel Liebler, who will use his $7.5 million, five-year funding to explore ways to reduce the variability of results achieved by shotgun proteomics and to increase throughput.
 
At the Association of Biomolecular Resource Facilities annual conference this week in Tampa, Fla., Liebler presented some of the research his team has done as part of the NCI grant.
 
ProteoMonitor first chatted with Liebler in 2003 when he was about to leave the University of Arizona for Vanderbilt. We caught up with him at ABRF to discuss his work. Below is an edited version of the conversation.
 
During your talk today, you said that shotgun proteomics revolutionized cell biology. What did you mean?
 
Shotgun proteomics, by providing a method of identifying proteins in small complexes or in medium and even large, complex proteomes, has allowed for the identification of protein-protein interactions that were previously unknown. It has allowed the site-specific mapping of modifications. Previously, the only route to that was antibodies that were putatively site-specific, but that has always been a technical hurdle.
 
So a large fraction of the protein-protein interactions that are now known, and of interactomes like those in yeast and C. elegans, have been documented or even discovered by shotgun proteomics, using pull-down experiments and a shotgun inventory of the pull-downs.
 
There were those two big papers … from the Cellzome Group and Anne-Claude Gavin's group [at European Molecular Biology Laboratory] that used essentially pull-down and shotgun identifications to inventory protein-protein interactions on a large scale in yeast. And that work's been continuing. And that complements yeast two-hybrid stuff. That has been a tremendous tool for discovering protein-protein interactions that weren't known.
 
I know that if you look at Nature, Science, Cell, Molecular Cell, and the other sort of top journals in cell biology, you see a lot of work using shotgun proteomics to identify partners and proteins that were pulled down.
 
Can you give us an update on the work you're doing now with the NCI grant?
 
The NCI grant is part of their clinical proteomics technology initiative. The awards they have given to the consortium partners, the five partners, have been focused on standardizing, improving, and harmonizing mass spec-based technology for shotgun proteomics. So this really is focused almost entirely on mass spec-based technologies.
 
When we decided to respond to this RFA, we thought that there was an opportunity to drive shotgun proteomics into clinical application, but that it needed standardization, improvement, and refinement. I think the technology has proved itself as a valuable tool for basic biology. But it has never been necessary to apply it in a very standardized fashion to be successful in cell biology and biochemistry.
 
But to analyze dozens of tissue samples in a series of studies to try to identify a putative cancer marker, you've got to be able to document that you've got a reproducible technology platform and that the differences that you see are not due to a variation in the performance of the platform, but that they're due to biology.
 
So that's what we want to be able to show. And I think the key with the NCI program is to be able to show that when we say that we detect things that are different or that we quantitatively measured candidates, those measures are reliable, or that their variation can be clearly defined and used to assess the degree to which any of those changes are due to the disease as opposed to variations in the technology.
 
When you say you want to standardize the technology, are you talking about the instruments themselves, or the methods of doing things, or both?
 
All of those things, and more. Standardization and quality control can mean assessing the characteristics of a good plasma sample. You can have a procedure for preparing plasma, but a nurse or a laboratory technician might not follow your procedure on a Friday afternoon, or when they take a long lunch break, or whatever it might be.
 
How is it that you're going to know that your sample is a bad plasma sample? What are the characteristics of a bad plasma sample? It should be possible to come up with measures of plasma samples. For example, we're interested in simple dilute-and-shoot MALDI analyses, not to identify anything, but to just assess patterns that would be indicative of variability in the quality of plasma samples.
 
So that's an example of using mass spec-based proteomics just to assess the first phase of an analysis. In the consortium, and certainly in the Vanderbilt team projects, we're talking about standardizing digestion conditions, again using tagged substrates to verify the degree or completeness of digestion. We also want to use peptide separations, of course, that can be standardized, where we can standardize the performance of the separation — how many different peptides per fraction, for example. We can have metrics for that.
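To make the idea of a separation metric concrete, here is a minimal sketch in Python, with a hypothetical data layout rather than the consortium's actual tooling: it simply counts the distinct peptides identified in each fraction of a run.

    def peptides_per_fraction(fraction_to_peptides):
        """fraction_to_peptides: dict mapping a fraction number to the list of
        peptide sequences identified in that fraction (hypothetical layout)."""
        # Collapse duplicates within a fraction so repeat identifications of the
        # same peptide are not overcounted.
        return {fraction: len(set(peptides))
                for fraction, peptides in fraction_to_peptides.items()}

Comparing such a per-fraction profile against a reference run is one way a metric like this could be used to flag a separation that is not performing to the agreed standard.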
 
And then, of course, standardizing the informatics platforms and the tools and the assumptions that you use — which databases are searched, what percentage of false positives you are going to allow, estimated based on reverse database searching.
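As an illustration of the reverse-database idea, here is a minimal sketch in Python; the function name and data layout are hypothetical and this is not the consortium's pipeline. Matches against the reversed (decoy) sequences above a score cutoff are used to approximate the number of random matches among the forward (target) hits above the same cutoff.

    def estimated_false_positive_fraction(psms, score_cutoff):
        """psms: iterable of (score, is_decoy) pairs from a concatenated
        forward-plus-reversed database search; is_decoy is True for matches
        against the reversed sequences."""
        targets = 0
        decoys = 0
        for score, is_decoy in psms:
            if score >= score_cutoff:
                if is_decoy:
                    decoys += 1
                else:
                    targets += 1
        if targets == 0:
            return 0.0
        # Assumes each decoy hit above the cutoff mirrors roughly one random
        # (false) hit among the target matches above the same cutoff.
        return decoys / targets

The score cutoff can then be raised or lowered until the estimated fraction matches whatever false-positive allowance the analysis has declared.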
 
And then introducing things like protein parsimony, and having very well-documented methods for doing all those steps, so that when somebody gets your dataset — say you've identified 3,512 proteins on average in 100 polyp samples from a colon — they can know exactly how they would need to do it in order to try to reproduce your work.
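Protein parsimony itself can be illustrated with a minimal sketch, here written in Python as a greedy set cover; real inference tools handle protein grouping and shared-peptide scoring in more detail, so this is only an assumption-laden outline of the idea.

    def parsimonious_protein_list(protein_to_peptides):
        """protein_to_peptides: dict mapping a protein accession to the set of
        confidently identified peptides attributable to that protein."""
        unexplained = set().union(*protein_to_peptides.values())
        selected = []
        while unexplained:
            # Greedily pick the protein that explains the most peptides not yet
            # accounted for by an already selected protein.
            best = max(protein_to_peptides,
                       key=lambda p: len(protein_to_peptides[p] & unexplained))
            newly_covered = protein_to_peptides[best] & unexplained
            if not newly_covered:
                break
            selected.append(best)
            unexplained -= newly_covered
        return selected

The point of reporting such a minimal protein list, together with the exact rules used to build it, is that another group can apply the same rules to the same peptide identifications and arrive at the same protein count.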
 
Are you working with HUPO's PSI initiatives, or, because they're really working on general proteomics standards, do they not have much to do with what you're doing with cancer?
 
There is some overlap in the people involved in HUPO's initiatives and the NCI program, but there's no formal relationship. The NCI program does have a formal interaction with the National Institute of Standards and Technology and is helping to prepare standards that the group is sharing and using for some comparisons of platforms. There's also a collaboration with Argonne National Laboratory to prepare labeled intact protein standards for dilution and detection-limit type experiments.
 
The lack of raw data has been a real issue with researchers. Are you finding that as well?
 
We're generating our own data, so all the data that will be analyzed as part of the [NCI] program will be generated by the [NCI] teams. Some of this work will be on individual samples collected at the individual institutions, done with variations of methods and platforms. And then some of it will be on shared samples, where we've all agreed on what their composition will be and how they'll be prepared, and then we will do cross-institutional studies. In some cases, we'll actually use the same type of mass specs.
 
For example, we're planning a study where we will do a shotgun proteomics comparison on ion trap MS instruments, Thermo LTQs, for example. So everyone will use the same instrument with the same SOP and the same sample. We would like to address the question: what is the inter-lab variation with a defined protocol and the same instrument?
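One simple way such inter-lab variation could be expressed, sketched here in Python with a hypothetical data layout and not the study's actual analysis plan, is the per-protein coefficient of variation across the participating laboratories.

    from statistics import mean, stdev

    def interlab_cv(measurements_by_lab):
        """measurements_by_lab: dict mapping a lab name to a dict of
        protein accession -> measured abundance (e.g., spectral counts or
        peak areas) for the shared, agreed-upon sample."""
        # Only score proteins observed by every participating laboratory.
        shared = set.intersection(*(set(m) for m in measurements_by_lab.values()))
        cvs = {}
        for protein in shared:
            values = [measurements_by_lab[lab][protein] for lab in measurements_by_lab]
            if len(values) > 1 and mean(values) > 0:
                cvs[protein] = stdev(values) / mean(values)
        return cvs

A defined protocol and a common instrument type make a summary like this interpretable: any residual spread is attributable to the laboratories rather than to differences in platform or method.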
 
If you're trying to create standards with the instruments, are you working with commercial vendors? What are their roles in this?
 
We have had discussions with a couple of commercial providers of protein standards, and those discussions are ongoing. We have [agreements] in place to discuss using these, and some of the protein standards will be produced with the participation of a commercial company. Vendors will generate these. I can't go into any more details, but we will be relying in part on commercially available products to generate standards that we'll use for our comparisons.
 
What about the NFCR grant? What kind of research are you doing with that?
 
The National Foundation for Cancer Research provided a [grant] to Vanderbilt that's focused on proteomics and drug action. And there, the problem is not discovering biomarkers for early detection, but rather applying novel technologies to enable proteomics approaches to discover the targets of anti-cancer drug action. Many anti-cancer drugs still have unknown or incompletely characterized targets.
 
This brings together Larry Marnett, a professor of biochemistry and chemistry, who's really developing novel drug analogs and affinity probes. We're using mass spectrometry-based proteomics to identify the targets of those drug probe molecules. And then Richard Caprioli's group is using MALDI-based tissue analysis to extend that effort to investigate the actions of drugs at the tissue level.
 
So that's very different than the NCI program. It's a somewhat smaller, more focused program.
 
When we spoke a few months ago, you said that you didn't believe that protein arrays could generate reliable or meaningful data. What did you mean by that?
 
I think that protein arrays for discovery proteomics have been shown to be quite limited in their capabilities. Take antibody arrays: I am not an expert on them, but I've seen what's in the literature and I've heard a lot of talks about antibody arrays, and I realize the biggest limitation is the availability of high-quality antibodies that … perform well in an array format.
 
There aren't as many antibodies that can be arrayed as there are proteins that can be detected by shotgun proteomics. I think that's a limit of antibody arrays. Now, one good thing about array technology is the so-called reverse-phase arrays. Again, you're dependent on the quality of the antibody reagent, but here's a method that allows you to probe in any tissue sample, or other types of samples, for the presence of things that you have a good reagent directed against.
 
So part of the Vanderbilt CPTAC team is a collaboration with Gordon Mills at MD Anderson to compare reverse-phase arrays, using antibodies in Gordon's lab, with targeted MRM [multiple reaction monitoring] LC-MS in our lab, measuring the same proteins in the same samples to compare the performance of the two approaches.
 
There's a lot of research being done in cancer with proteomics. Are there any projects that excite you?
 
I must say I'm very excited about our program at Vanderbilt … because we really do sort of have the best confluence of resources. We really have an outstanding mass spectrometry resource. The mass spectrometry research center at Vanderbilt has almost 40 mass specs, which is really remarkable by any standard.
 
But we also have this Ayers Institute initiative, which I'm directing, which represents a major shot in the arm for really developing proteomics platforms for clinical proteomics in cancer. And on top of that we've got the NCI grants. So in terms of the resources, we've got a great critical mass of people, everything from analytical chemistry people to proteomics people, informatics staff, applications developers, bioinformatics researchers, and so forth. So we've got all of the key players we need to have a major impact.
 
I'm excited about the NCI clinical proteomics program. I think that program really did identify many of the best people in the country with an interest in clinical proteomics. Certainly, there have been outstanding proteomics people who aren't involved because they haven't chosen to go in that direction, but Steve Carr and his group [at the Broad Institute], and that team including Mandy Paulovich [at the Fred Hutchinson Cancer Research Center] and Leigh Anderson [at the Plasma Proteome Institute] and others, I think, are going to do as well as anybody will ever be able to do on MRM of blood proteins. They're perfectly equipped to do that.
 
They've got some major technical hurdles to solve, but I'm delighted with the prospect of collaborating with them. The MRM quantitation is also a minor part of the Vanderbilt initiative, and I'm really delighted that we can collaborate with the group at the Broad Institute.
 
I think Paul Tempst [at Memorial Sloan-Kettering Cancer Center] is doing some very innovative work. He's been doing some of the most interesting work in discovery serum proteomics, and that's an approach that's been largely disappointing and is largely disdained by the cancer research community at this point.
 
Why's that?
 
Well, I think the early discovery efforts suffered from a number of weaknesses, from poor-quality analytical chemistry and mass spectrometry to overly naïve approaches to informatics and interpretation of the data. And I think it led to a backlash that was every bit as inappropriate as the enthusiasm that first came out in the late '90s for this type of work.
 
Many people first heard that serum MALDI profiling was going to identify biomarkers for all cancers. And then it turned out the news wasn't as good as originally reported. Then came the backlash, and I think both of those interpretations were unjustified. The fact that we have some cancer markers that are blood proteins that are based in tumors already shows the approach can work. But you've got to have the right approaches to discovery, verification, and validation of candidate markers.
 
I think the people that are involved in this NCI program are some of the best people to address those problems. I think it's going to boil down, in the end, to good quality analytical chemistry.
