ABRF sPRG Study Finds Field is Precise but Still Struggles with Accuracy in Quantitative Proteomics

Albuquerque – When it comes to protein quantitation, the proteomics community is very precise, but not all that accurate.

That was the takeaway from the Association of Biomolecular Resource Facilities' Proteomics Standards Research Group (sPRG) 2013 study, the results of which the organization presented this week at the ABRF annual meeting in Albuquerque.

The 2013 sPRG study tasked participants with relative quantitation of up to 1,000 heavy/light peptide pairs in a single sample. Labs were given mixtures of a stable isotope labeled synthetic peptide standard combined with a tryptic digest of HEK293 cells and were asked to analyze the sample in triplicate using the mass spec instrument and workflows of their choice.

The results indicated that while participants could achieve precise intra-laboratory measurements of the majority of the proteins across runs, achieving accurate quantitation was more difficult.

"Participants' results were very precise; they can identify virtually all these standards in the mixture, and their ability to reproducibly measure them seems very good," Chris Colangelo, director of the protein profiling resource at Yale University and chair of the study, told ProteoMonitor. "Their ability to accurately measure them, at least from the initial results, seems to have a higher variance."
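The precision-versus-accuracy distinction Colangelo draws can be illustrated with a toy calculation (the numbers below are hypothetical, not drawn from the study's data): precision is the spread of one lab's replicate measurements, commonly expressed as a coefficient of variation, while accuracy is how far the replicate mean sits from the known spiked-in ratio.

```python
import statistics

# Hypothetical triplicate light/heavy ratio measurements for one peptide
# from two labs; the true spiked-in ratio is assumed here to be 1.0.
true_ratio = 1.0
lab_a = [1.52, 1.49, 1.51]  # tight spread (precise), but far from truth
lab_b = [0.70, 1.30, 1.05]  # near the truth on average, but noisy

def precision_cv(replicates):
    """Coefficient of variation (%) across replicate runs."""
    return 100 * statistics.stdev(replicates) / statistics.mean(replicates)

def accuracy_error(replicates, truth):
    """Percent deviation of the replicate mean from the known value."""
    return 100 * abs(statistics.mean(replicates) - truth) / truth

for name, reps in [("Lab A", lab_a), ("Lab B", lab_b)]:
    print(f"{name}: CV = {precision_cv(reps):.1f}%, "
          f"error vs. known ratio = {accuracy_error(reps, true_ratio):.1f}%")
```

In this sketch, Lab A would score well on the precision criterion the study participants met, while missing the accuracy criterion they struggled with.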

Based on the initial analysis of the results, there was a group of roughly 100 peptides that the participants were able to quantify with good accuracy. The participants quantified another roughly 300 peptides with some degree of agreement between them. For the peptides beyond that, the values obtained by the participating labs varied widely.

The study leaders had not had time yet to determine what, if anything, was common to the roughly 100 peptides the participants had success measuring, Colangelo said, noting that they would seek to identify any such factors in the next stage of data analysis.

One probable contributing factor, he said, was the fact that some of the native peptides were likely present in the HEK lysate only at very low levels. "So while these peptides are present, they are in such low concentrations that we wouldn't expect to see them," he said.

This, Colangelo said, is one of the benefits of having established a peptide standard mix that can now be used for future research. The standard developed for the sPRG study is now commercially available from JPT Peptide Technologies.

"As instruments become more sensitive, people are going to be able to quantitate all 1,000," he said. "In five years, we'll be able to run this sample and say, 'Look at how far we've come. Five years ago we could really do only 100 peptides well with 20 percent error, and now we're doing 700 well at 20 percent error.' It has growth and legs."

As for what the study indicates about current quantitative proteomics experiments, Colangelo suggested that despite the lack of agreement between labs, the precision of the measurements means such efforts can still be effective at identifying differential expression of proteins between samples.

If two labs are precise in their measurements and consistently see the same differences, then, he said, this relative quantitation data is useful even if the absolute values are slightly different from each other.
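Colangelo's point can be sketched numerically (again with hypothetical values, not study data): even when two labs' absolute measurements disagree, their within-lab fold changes between conditions can still agree, which is what a differential-expression experiment relies on.

```python
# Hypothetical: two labs measure the same protein in control vs. treated
# samples. Their absolute intensities disagree, but because each lab is
# internally precise, both recover the same ~2x fold change.
lab_a = {"control": 2.0, "treated": 4.1}   # arbitrary intensity units
lab_b = {"control": 3.1, "treated": 6.2}   # systematically higher readings

fold_a = lab_a["treated"] / lab_a["control"]
fold_b = lab_b["treated"] / lab_b["control"]
print(f"Lab A fold change: {fold_a:.2f}")
print(f"Lab B fold change: {fold_b:.2f}")
```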

"[Accuracy] matters when we go to the clinic and we say, 'You have to have 50 femtomoles of [a protein] for a diagnosis,' but we're not doing that here," he said. "So I think that's where we're at – we're extremely precise, but our accuracy can be improved."

He compared the study to the first phase of the National Cancer Institute's Clinical Proteomic Technology Assessment for Cancer initiative, which ended in 2011.

"If you go back to the CPTAC study, they showed the same interlaboratory error rates, so I don't think we've shown anything different," he said. However, as his fellow sPRG committee member Brian Searle, principal scientist at Proteome Software, noted, the fact that the CPTAC groups were working under defined protocols while the sPRG participants were essentially left to their own devices, does suggest that the field has advanced over the last few years.

The study participants used a wide variety of technologies, ranging from conventional data-dependent acquisition mass spec to data-independent techniques, like Swath, to parallel-reaction monitoring and multiple-reaction monitoring. At least according to the initial analysis of the results, no one method or instrument performed better than another.

This might seem surprising, particularly in the case of MRM, which is generally considered the gold standard for mass spec-based protein quantitation. However, Searle said, it's not entirely unexpected given the challenge of generating MRM assays for such a large collection of peptides.

"You have to think about the complexity of this experiment," he said. "One thousand peptides means you have to be doing maybe 6,000 to 8,000 transitions to accurately quantify them all."
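Searle's arithmetic works out if each heavy/light peptide pair is monitored in both its labeled and unlabeled forms at a typical three to four transitions per precursor (the per-precursor transition count is a common rule of thumb, not a figure stated in the study):

```python
peptide_pairs = 1000   # heavy/light pairs in the sPRG standard
forms_per_pair = 2     # light (endogenous) + heavy (labeled) precursors

# Typical MRM practice monitors roughly 3-4 transitions per precursor.
for transitions_per_form in (3, 4):
    total = peptide_pairs * forms_per_pair * transitions_per_form
    print(f"{transitions_per_form} transitions/precursor -> "
          f"{total} total transitions")
```

This reproduces the 6,000-8,000 transition range Searle cites, far beyond what a single scheduled MRM method comfortably accommodates.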

Instead, Colangelo said that the few groups that did choose to analyze the study using MRM typically chose a smaller subset of peptides to concentrate on, which might have affected their results. "One group just chose the first 60 peptides," he said. "If they had chosen a different 60, their comparison might have been different."

Additionally, he said, given the small number of groups who chose to use MRM, it's difficult to make any broad conclusions. "There [are] not enough Ns to really say anything."

In all, 90 labs signed up to participate in the study and 40 returned results. Given the participants' free choice of workflows, the results provide some insight into the most commonly used instrumentation and software for such experiments.

Thermo Fisher Scientific was by far the most popular vendor, with 29 of the 40 labs using a Thermo Fisher instrument, including 10 Q Exactives, eight Orbitrap Velos, and five Orbitrap Elites. AB Sciex was the next most popular vendor, with 10 instruments including eight TripleTOF 5600s. One Bruker Impact mass spec was also used.

Among software packages, the University of Washington's Skyline program was most popular, with 34 participants using it. Three labs used the Max Planck Institute's MaxQuant, two used Thermo Fisher's Proteome Discoverer, and one used Nonlinear Dynamics' (now owned by Waters) Progenesis product.
