NCI Papers Describe Yeast Reference Standard, LC-MS Benchmarking Metrics

By Tony Fong

As part of its ongoing effort to evaluate mass spectrometry-based platforms for unbiased discovery work in proteomics, members of the National Cancer Institute's initiative on clinical proteomics last month published work meant to help researchers in the field better assess their instrument platforms.

The work is described in two papers published online by Molecular & Cellular Proteomics. In one study published Oct. 26, the NCI's Clinical Proteomic Technologies Initiative for Cancer details the use of the yeast Saccharomyces cerevisiae as a performance standard. The other, published Oct. 29, describes 46 performance metrics for LC-MS-based proteomics.

According to Daniel Liebler, a professor of biochemistry, pharmacology, and biomedical informatics at the Vanderbilt University School of Medicine and the corresponding author on both studies, the papers grew out of a project to investigate the suitability of LC-MS platforms for the unbiased discovery of cancer biomarkers in tissue or biofluids.

"The key question there is: When you analyze sets of samples that correspond to cancer phenotypes … and you see differences, is that due to the biology, or is that due to the variability or instability of the analytical platforms?" he told ProteoMonitor this week.

In recent years, a number of other efforts have led to protein mixture standards for use by researchers as a way to benchmark their instruments and the quality of their own work. In 2006, the Association of Biomolecular Resource Facilities' Proteomics Standards Research Group developed a protein standard mixture of 48 proteins that was commercially launched by Sigma-Aldrich [See PM 02/23/06].

That same year, the Human Proteome Organization started its own effort to develop a 20-protein standard mixture [See PM 07/20/06]. It is currently negotiating with an undisclosed vendor to sell the mixture after Life Technologies' Invitrogen pulled out of an agreement to do so [See PM 10/09/09].

Closer to Experimental Conditions

But, Liebler said, he and his colleagues were interested in using as a standard a model system that would be more indicative of the kind of sample that is typically analyzed in biomarker studies.

"We wanted to model the performance or to study the performance of discovery platforms in a much more complex system that would be more like what you would deal with when you're dealing with tissues or biofluids from humans," he said. "There's just much more complexity, [and a] much broader range of concentrations of proteins because it's from a living organism."

The work on the yeast standard began in November 2006, shortly after the CPTC awarded five teams of researchers a five-year, $35.5 million grant to evaluate proteomics technologies applicable to cancer research. That initiative is called Clinical Proteomic Technology Assessment for Cancer, or CPTAC.

Liebler and his fellow researchers began their work with a 20-protein mixture prepared for analysis by the National Institute of Standards and Technology.

But a few months after that initial test, which resulted in a high variability of results, yeast was chosen as the basis for a standard. The choice was based partly on earlier work that had measured the concentrations of a large fraction of its proteins.

"The yeast offered to us a concentration-annotated proteome, so we could basically look up the numbers in the back of the book, if you will," Liebler said.

As the only concentration-annotated eukaryotic proteome to date, yeast also provided the researchers with a way to evaluate another important component of LC-MS platforms: their ability to detect proteins at different concentration levels.

Because the concentrations of so many yeast proteins had been measured, the organism offered "a ground truth" by which to test the platforms' ability to detect the concentration levels of proteins spiked into the yeast standard.
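As a rough illustration of this idea, the sketch below checks how well an observed LC-MS signal tracks known spike-in concentrations. The protein names and numbers are hypothetical, not data from the CPTAC studies.

import math

# Hypothetical spike-in data: known concentrations (fmol) and observed
# signal (e.g., summed spectral counts) for each spiked protein.
spiked = {
    "PROT_A": {"known_fmol": 0.5, "observed": 3},
    "PROT_B": {"known_fmol": 5.0, "observed": 41},
    "PROT_C": {"known_fmol": 50.0, "observed": 380},
}

# Fit log10(observed) = slope * log10(known) + intercept by least squares;
# a slope near 1 suggests the measured signal scales with concentration.
xs = [math.log10(v["known_fmol"]) for v in spiked.values()]
ys = [math.log10(v["observed"]) for v in spiked.values()]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
print(f"log-log slope = {slope:.2f}, intercept = {intercept:.2f}")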

The first pilot study looking at yeast was conducted in late spring 2007. "At that time we spent about half a year working through various options to standardize an analysis method that all the groups could use," Liebler said. A designated SOP was used, and attempts were made to standardize LC-MS settings to a greater extent. "There were a number of changes that had to be made. Some labs had to upgrade software," Liebler added.

However, high variability was observed among the labs in the peptides they were able to identify.

Another yeast study followed, in which a new SOP was adopted and some samples were spiked with bovine serum albumin. In the key study that followed, a series of samples was generated in which a mixture of 48 human proteins from Sigma-Aldrich was spiked into the yeast standard in order to evaluate each lab's ability to detect protein concentration levels. That study also used a newly tweaked SOP.

The same study was then repeated, essentially without an SOP, in order to compare results.

According to Liebler, improvements to the SOP were made continually along the way, many of them involving mass spec tuning parameters. The biggest changes, though, focused on chromatography.

"That was clearly … the greatest source of variability," he said, adding one of the key messages of the yeast study is that "chromatography is one of the hardest things to standardize and control and has a huge impact on the performance of the system."

Troubleshooting LC-MS Platforms

Liebler and his co-researchers in the working group also applied the set of 46 metrics developed by Paul Rudnick, Stephen Stein, and others at NIST to monitor the performance of LC-MS systems. The metrics fall into six classes: chromatography, dynamic sampling, ion source, MS1 signal, MS2 signal, and peptide identification.
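To make that structure concrete, here is a minimal sketch of how metrics might be grouped by class and computed from a pre-parsed run summary. The metric definitions are simplified stand-ins for illustration, not the actual NIST formulas.

from statistics import median

def chromatography_metrics(run):
    # E.g., median chromatographic peak width of identified peptides.
    return {"median_peak_width_s": median(run["peak_widths_s"])}

def dynamic_sampling_metrics(run):
    # E.g., fraction of MS2 triggers that yielded a confident peptide ID.
    return {"ms2_id_rate": run["peptide_ids"] / run["ms2_spectra"]}

def peptide_id_metrics(run):
    return {"peptide_ids": run["peptide_ids"]}

# Three of the six classes are shown; ion source, MS1 signal, and MS2
# signal would follow the same pattern with their own extractors.
METRIC_CLASSES = {
    "chromatography": chromatography_metrics,
    "dynamic sampling": dynamic_sampling_metrics,
    "peptide identification": peptide_id_metrics,
}

run = {"peak_widths_s": [14.2, 15.8, 13.9, 16.1],
       "ms2_spectra": 24000, "peptide_ids": 7200}
print({cls: fn(run) for cls, fn in METRIC_CLASSES.items()})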

"The fact that we were doing a series of studies that were trying to standardize and evaluate the performance of the systems and identify sources of variation … made it obvious as an opportunity to apply the metrics," Liebler said.

One of the potential key benefits of the metrics work is that it represents a way other than peptide identification for proteomics researchers to evaluate the performance of their instruments.

"In proteomics, the only kind of outcome measure people have is peptide and protein identifications, and we felt that that's an inadequate measure of the performance of the system because if that number decreases it could be due to many possible causes," Liebler said. "Even if … you eliminate the biological [causes] and maybe even the pre-analytical [causes], the same injection-ready sample [can give] many different results, which is what we saw in many different cases. It could be due to variations in the performance of the chromatography, the mass spec, the data analysis."

The NIST group identified more than 100 measurable factors that can be extracted from the data; the 46 contained in the MCP study are those that could be extracted from data files and "reflected the performance of these different components," Liebler said.

The metrics, he said, "are kind of like taking a car into the shop and plugging it into the computer and it tells you the timing's off, or the catalytic converter's malfunctioning, or one of the temperature sensors is malfunctioning."

They are particularly useful for "modest degradations" in platform performance, when the causes may not be obvious and "you might not be sure you should use that data or not and you just keep going," Liebler said. The metrics would help a researcher diagnose such moderate losses in system performance, "which are much harder to figure out."
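One way to picture such a diagnosis (a hypothetical sketch, not the CPTAC implementation) is a control-chart-style check that flags any metric drifting more than a few standard deviations from a baseline of known-good runs. All values here are made up for illustration.

from statistics import mean, stdev

# Made-up baseline values from previous acceptable runs, per metric.
baseline = {
    "median_peak_width_s": [14.1, 14.5, 13.8, 14.9, 14.3],
    "ms2_id_rate": [0.31, 0.29, 0.33, 0.30, 0.32],
}
current = {"median_peak_width_s": 19.7, "ms2_id_rate": 0.30}

def flag_drift(baseline, current, k=3.0):
    # Flag any metric whose current value sits more than k standard
    # deviations away from its baseline mean.
    flags = {}
    for metric, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        z = (current[metric] - mu) / sigma if sigma else 0.0
        flags[metric] = abs(z) > k
    return flags

print(flag_drift(baseline, current))
# Here the widened peak width would be flagged, pointing to
# chromatography even though the peptide ID rate looks normal.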

In addition to serving as a troubleshooting manual, the metrics also provide for the first time a comprehensive quality control profiler for LC-MS systems, which would benefit biomarker studies, he added.

"If you're going to analyze a set of cancers and a set of normals and you see differences in the data and you say, 'These are candidate biomarkers we're going to follow up on,' how can you convince the research community that those differences are due to biology and not due to the instability of the analytical platform?" Liebler said. "And the answer, in our opinion, is that performance metrics can unambiguously document how the system is performing during the analysis."

Gene-expression profiling and high-throughput sequencing have similar QC samples that are analyzed to document system performance, but that is something that has been missing in proteomics, he said.

"So the metrics could lead us to [the kind of] objectively driven, automated process monitoring that the field needs if the data are going to be evaluated as to quality," Liebler said.

Research leading to the yeast standard and the application of the 46 metrics by the CPTAC researchers was done on Thermo Fisher Scientific's LTQ, LTQ-XL, LTQ Orbitrap, and LTQ-XL Orbitrap because all of the participating labs had those instruments and, said Liebler, they represent the dominant platforms in LC-MS/MS proteomics research.

The yeast standard and 46 metrics apply to any mass spec from any vendor, however, he said.

The certified yeast reference material is under development at NIST and will become available in 2010. For now, aliquots of the yeast standard are available through the institute. Requests can be made at [email protected].

In addition to the yeast standard and the metrics, Liebler and his colleagues have submitted a manuscript focused on improving repeatability and reproducibility of proteomics experiments. In the remaining two years of the $35.5 million CPTAC initiative, the researchers will compare biomarker discovery platforms in real tumor tissue samples. A study has just begun and is expected to last a year and a half.

It will encompass systems that measure different "things about the proteome. We're interested in comparing … protein inventory like we did in these two [current] studies, but also phosphoproteomics, glycoproteomes, and so forth, analyzed in different ways.

"I think the longer-term question is: How much of the proteomic universe of cancer is detectable by current mass spec-based proteomics platforms … and what's the overlap between the different parts of the proteomic universe that's seen by doing phosphoproteomics vs. non-modified protein inventories vs. glycoproteomics, and so forth?" Liebler said.
