CPTAC Awardees Tackling Reproducibility, Quantification to Resolve Bottlenecks

The five laboratories awarded $35.5 million in late 2006 by the National Cancer Institute to explore proteomics technologies for cancer research are concentrating on reproducibility, protein quantification, and how different tissue and fluid matrices may affect discovery and validation, according to an NCI update earlier this month.
 
At the meeting, which included remarks by two of the five recipients sharing the $35.5 million, five-year grant, the NCI updated the National Cancer Advisory Board on its Clinical Proteomic Technologies Initiative for Cancer, a five-year, $104 million program created in 2005 to explore how proteomics can be applied to cancer research.
 
CPTI is made up of three parts: the Clinical Proteomic Technology Assessment for Cancer; the Advanced Proteomic Platforms and Computational Sciences awards; and the Clinical Proteomic Reagents Resource [See PM 09/28/06].
 
At the meeting, the two labs provided an overview of what CPTAC set out to do and where it now stands. While the five CPTAC grantees and their collaborators are conducting their own projects in connection with the award, they are also working within a common framework to address big-picture questions that have slowed progress in proteomics research.
 
According to Steven Carr, who is directing work at the Broad Institute of the Massachusetts Institute of Technology and Harvard University, one of the five CPTAC grant recipients, the groups are looking at five overarching issues: how completely an unbiased discovery experiment represents the proteins present in a sample at each decade of concentration (e.g., 1 microgram per milliliter, 100 micrograms per milliliter); the reproducibility of various discovery platforms in detecting true differences between samples, i.e., false discovery rates; the reproducibility, accuracy, and sensitivity of various validation platforms; whether discovery and validation platforms require different measurement endpoints and/or different specifications on those endpoints; and the impact of matrix complexity on discovery and verification.
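 
One of those issues, false discovery rates, is commonly estimated in shotgun proteomics with a target-decoy search, in which spectra are matched against both real and reversed protein sequences and the decoy hit rate approximates the error rate. The source does not specify CPTAC's method; this is a minimal sketch of the standard estimate, with hypothetical counts:

```python
# Target-decoy false-discovery-rate estimation, a common approach in
# shotgun proteomics. The hit counts below are hypothetical.
def estimate_fdr(target_hits: int, decoy_hits: int) -> float:
    """Approximate FDR as decoy hits over target hits above a score cutoff."""
    if target_hits == 0:
        return 0.0
    return decoy_hits / target_hits

# e.g., 5,000 target matches and 50 decoy matches above the cutoff
print(f"Estimated FDR: {estimate_fdr(5000, 50):.1%}")  # 1.0%
```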
 
Two years ago when the CPTAC awards were announced, there was no shortage of proteomics work being done, Carr said, but the field was plagued by issues that rendered many results questionable.
 
Sensitivity was a major issue, he said, as only the most abundant proteins were being detected. Specificity was another problem as patterns being seen were not always indicative of disease.
 
Many studies, he said, were simply poorly designed.
 
Even the choice of samples used in experiments created issues. While blood is often the fluid of choice among researchers because it is plentiful and easy to collect, it also presents major challenges. Blood has a dynamic range of 11 orders of magnitude, while mass spectrometers have a dynamic range no greater than three orders of magnitude. As a result, only about 1,000 proteins can be identified with high confidence, the majority of them high-abundance proteins.
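 
The arithmetic behind that limitation is straightforward: if an instrument can only see species within about three decades of the strongest signal, everything below that window is lost without depletion or fractionation. A back-of-the-envelope sketch, with hypothetical plasma concentrations:

```python
# Hypothetical plasma protein concentrations in pg/mL, spanning roughly
# ten decades from albumin down to cytokines (illustrative values only).
proteins = {
    "albumin": 5e10,             # ~50 mg/mL
    "transferrin": 3e9,
    "C-reactive protein": 1e6,
    "PSA": 1e3,
    "IL-6": 5e0,                 # ~5 pg/mL
}

INSTRUMENT_DECADES = 3  # rough single-run dynamic range of a mass spec
ceiling = max(proteins.values())
floor = ceiling / 10 ** INSTRUMENT_DECADES

visible = [name for name, c in proteins.items() if c >= floor]
print(f"Detectable without depletion or fractionation: {visible}")
# Only the proteins within three decades of albumin survive the cut.
```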
 

“That begs the question, ‘Is blood really the best place to be doing a proteomics exercise in?’” Carr said. “Obviously it is the fluid of choice in your ultimate clinical test via a proteomics method or by a clinical assay, but as a discovery fluid it actually represents the most complex proteome that you could work with.” 
 
While methods of validating biomarkers were well established by 2006, they also proved to be very slow and expensive, and so these methods were reserved for only the most promising biomarkers.
 
Finally, the ad hoc data-analysis methods that dotted the field proved to be “a recipe for disaster,” Carr said. “Every different algorithm that you use to process the data will give you a slightly different answer, so the ability to compare apples to apples was just not there.”
 
To address some of these and other issues, the CPTAC grantees met in October 2006 soon after they had been named winners of the grants to develop a strategy. Among the things they decided were necessary were common sample collection methods to ensure high-quality samples; the analysis of mass-spec data through common pipelines using common databases with defined criteria; and the development of technologies to bridge the discovery of candidate biomarkers in tissue to quantitative assays in blood.
 
Eventually, CPTAC created eight working groups to tackle those issues. At the meeting this month, Daniel Liebler, who is heading the CPTAC team at Vanderbilt University, reviewed the work of two groups, the Unbiased Discovery group and the Targeted Verification group.
 
The former has so far completed five studies to assess different technologies, with a sixth study being done this month. For its first study, done in November 2006, members of the working group were asked to use whatever platforms they chose to analyze a 20-protein standard mixture developed by the National Institute of Standards and Technology. No standard operating procedures were prescribed.
 
What resulted were highly variable detection results from each laboratory, and even variability in different samples within one lab, Liebler said.
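 
One simple way to express such variability is the coefficient of variation of the number of standard-mix proteins detected per run, both across labs and among replicates within a lab. A sketch with invented counts:

```python
import statistics

# Hypothetical counts of NIST 20-protein-mix proteins detected per
# replicate run at three labs (invented numbers for illustration).
detections = {
    "lab_A": [18, 14, 19],
    "lab_B": [11, 16, 9],
    "lab_C": [20, 12, 17],
}

all_runs = [n for runs in detections.values() for n in runs]
overall_cv = statistics.stdev(all_runs) / statistics.mean(all_runs)
print(f"CV across all runs: {overall_cv:.0%}")

# Within-lab CVs show that even replicates in a single lab disagree
for lab, runs in detections.items():
    cv = statistics.stdev(runs) / statistics.mean(runs)
    print(f"{lab} within-lab CV: {cv:.0%}")
```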
 
“Not surprisingly, we came out looking sort of like the Bad News Bears,” he said, referring to the 1976 film about a bumbling Little League baseball team. 
 
In follow-up studies, the working group implemented an SOP that it modified and refined along the way. The group also adopted an ion trap LC-MS platform as a standard technology because it is the “dominant instrument used for shotgun proteomics,” Liebler said, and replaced the NIST 20-protein human mix with the yeast proteome.
 
As they did so, their results showed greater accuracy and reduced variability, Liebler said. In its next study, the working group will analyze the yeast proteome spiked with a 48-human-protein mix acquired from Sigma-Aldrich, using a finalized SOP.
 
Eventually, the group will move away from the yeast model to cell models that represent “some type of relevant phenotypes that [are] representable in a cancer context,” Liebler said.
 
The Targeted Verification group has not been as active as the discovery working group, he said. Its goals include creating a performance standard for the LC-MS/MS-MRM system, in this case a standard human plasma sample that could be spiked with seven human proteins, which the group is currently working on; developing SOPs; and generating a dataset from which to develop metrics of accuracy and precision, Liebler said.
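 
For the spike-in standard Liebler described, accuracy and precision metrics would typically come from comparing measured concentrations of the spiked proteins against their known levels across replicate runs. A minimal sketch, with invented measurements:

```python
import statistics

# Hypothetical MRM measurements (ng/mL) of one protein spiked into a
# standard plasma sample at a known concentration, over five replicates.
spiked_ng_ml = 100.0
measured = [96.2, 103.5, 98.8, 101.1, 99.4]

mean = statistics.mean(measured)
accuracy = mean / spiked_ng_ml                    # closeness to true value
precision_cv = statistics.stdev(measured) / mean  # run-to-run scatter

print(f"Accuracy: {accuracy:.1%} of nominal; precision (CV): {precision_cv:.1%}")
```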
 
In addition to the work being done by the five CPTAC groups, the Clinical Proteomic Reagents Resource is creating an antibody-characterization laboratory at NCI-Frederick in Maryland [See PM 11/29/07]. During the meeting this month, Henry Rodriguez, director of CPTI, said that the companies that will be awarded contracts to make antibodies as part of that initiative will be announced later this month.
