NEW YORK (GenomeWeb) – Cutting-edge research labs often have the luxury of a narrow focus, which, in the case of proteomics, allows them to refine and optimize their instrumentation to tackle specific questions over extended periods of time.
This is not the case for core labs. On the contrary, these institutions typically run a variety of samples for a variety of groups looking at a variety of questions. That being the case, qualities like versatility and robustness must be balanced against raw performance.
With this in mind, researchers at the Functional Genomics Center Zurich recently published a study in the journal Proteomics evaluating the performance of several targeted proteomics methods in a core lab environment.
The study compared selected-reaction monitoring mass spec, parallel-reaction monitoring mass spec, and data-independent acquisition mass spec across five different platforms, looking at SRM on an AB Sciex QTRAP 5500 and a Thermo Fisher Scientific TSQ Vantage; PRM on a Thermo Scientific Q Exactive; and DIA on an AB Sciex TripleTOF 5600 and a Thermo Scientific Q Exactive HF.
The researchers tested these methods and platforms on a sample consisting of six trypsin-digested human proteins and 14 corresponding stable isotope-labeled peptides in a yeast matrix, assessing quantitative accuracy and precision first at constant concentrations and then across a concentration range spanning three orders of magnitude.
Broadly speaking, the results were as expected, said Christian Trachsel, an FGCZ researcher and author on the study. "There was not much surprise in the data itself," he told GenomeWeb. "We took a simple sample, made dilution series, and got more or less the results that are accepted in the community — that SRM offers the highest sensitivity and accuracy, while DIA is good if you have a survey and want to quantify a lot of targets."
More surprising, and reassuring, Trachsel noted, was the high reproducibility of the various methods, with all workflows achieving median peptide-level coefficients of variation below 20 percent.
"We took many different platforms, many different [LC] gradients, many different workflows, many different users, across many different days and didn't optimize everything too much, and still it was very reproducible across all the platforms," he said.
"The study was recorded over one and a half years, and one of the lab's employees even left during this project," Trachsel added. "So, it is a dataset that evolved over time which we have analyzed in the end, and it was astonishing but also reassuring to us how stable the workflows are."
DIA on the TripleTOF 5600 showed particularly high precision, with a median peptide-level CV of below 5 percent. DIA on the Q Exactive HF, meanwhile, was the least precise of the workflows, with a median CV of 15 percent.
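The precision figures above are median peptide-level coefficients of variation, i.e., for each peptide the standard deviation of its measured intensities across replicate runs divided by their mean, with the median then taken across all peptides. A minimal sketch of that calculation, using hypothetical replicate intensities rather than the study's actual data:

```python
import statistics

def median_peptide_cv(replicates_by_peptide):
    """Median coefficient of variation (percent) across peptides,
    given a dict mapping each peptide to its replicate intensities."""
    cvs = [
        100 * statistics.stdev(vals) / statistics.mean(vals)
        for vals in replicates_by_peptide.values()
    ]
    return statistics.median(cvs)

# Hypothetical intensities for three peptides across three replicate injections
data = {
    "PEPTIDEA": [1.00e6, 1.04e6, 0.97e6],
    "PEPTIDEB": [5.1e5, 5.4e5, 5.0e5],
    "PEPTIDEC": [2.2e4, 2.0e4, 2.5e4],
}
print(round(median_peptide_cv(data), 1))
```

By this measure, a workflow like DIA on the TripleTOF 5600 would keep the median of those per-peptide CVs under 5 percent across all monitored peptides.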
At the initial concentration levels used, all forms of analysis — SRM, PRM, and DIA — provided similar accuracy, though DIA on the TripleTOF 5600 had measurably lower accuracy than the other four workflows. Moving to lower concentration samples, though, the researchers found that SRM and PRM outperformed DIA, which, as Trachsel noted, was as expected. DIA, however, remains better suited for experiments monitoring large numbers of analytes, the authors noted.
The study did not specifically aim to compare SRM and PRM methods of quantitation, but Trachsel said that based on his work with the two methods, they should perhaps not be considered competitors so much as complements.
PRM is essentially a variety of data-independent mass spec in which the mass spectrometer, rather than analyzing the full range of a sample, is trained on a more targeted mass and time window. Compared to conventional SRM assays, the approach has various potential advantages.
For instance, because their analyzers are able to collect data on a wide range of ions, high-resolution machines could allow for easier assay development and better specificity. In a triple quad-based SRM assay, the first quadrupole isolates a target precursor ion, which is then fragmented in the second quadrupole, after which a set of preselected product ions is filtered in the third quadrupole and detected. By contrast, PRM approaches use the upfront quadrupole of a Q-TOF or Q Exactive machine to isolate a target precursor ion, but then monitor not just a few but all of the resulting product ions.
The larger number of product ions monitored via PRM should improve the specificity of the analysis, since more transitions will be available to confirm a peptide ID. The instrument's high resolution can also reduce the effects of co-isolating background peptides. Additionally, because researchers don't have to determine upfront which transitions are best to monitor, assay development could be quicker.
In fact, Trachsel said that assay development is easier with PRM. "You have more flexibility," he said. "You can see better which transitions are contaminated or not, which are good fliers or bad fliers."
However, he added, once the assay has been developed, transferring it to SRM on a triple quad has certain advantages.
"If you have a large sample cohort, I would still go for SRM because the technology in my eyes is more robust," he said. Additionally, PRM outputs significantly more data, which can require more storage space and longer processing times.
The study also looked at the effects of manual versus automated peak integration, comparing results from three expert researchers, three novices, and two automated mProphet models. For SRM data, the expert and novice researchers showed essentially equivalent performance, both representing a slight improvement over the automated models. For DIA data, on the other hand, expert users did significantly better than beginners, and automated peak picking performed similarly to the experts.
This large variance between users, as well as the finding that automated methods were essentially as good as expert researchers at peak picking, was very surprising, FGCZ researcher Christian Panse, also an author on the study, told GenomeWeb.
"You cannot do really something wrong if you use the automatic methods." he said. "That is the take-home message."