Improving Software, Instrumentation Boosting All Flavors of Protein Quantitation

NEW YORK (GenomeWeb) – This month's American Society for Mass Spectrometry annual meeting was a relatively quiet one for industry news in proteomics.

That said, vendors continued to push the technologies forward in a number of areas, one of the more notable being quantitative experiments, where companies are exploring a variety of approaches to reliably measuring levels of thousands of proteins across large sample sets.

Broadly speaking, quantitative proteomic workflows use either a data-dependent acquisition (DDA) approach or a data-independent acquisition (DIA) approach. In the former, the mass spec performs an initial scan of precursor ions entering the instrument and selects a sampling of those ions for fragmentation and generation of MS/MS spectra. However, because instruments can't scan quickly enough to acquire all the precursors entering at a given moment, many ions — particularly low-abundance ions — are never selected for MS/MS fragmentation and so are not detected.

This means that different peptides are selected for measurement across different runs, which can make it difficult to reproducibly quantify the same proteins across large sets of samples—an essential part of biomarker discovery and validation work.
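
That selection step can be made concrete with a minimal Python sketch of the top-N logic underlying most DDA methods; the function name, intensity threshold, and dynamic-exclusion scheme below are illustrative assumptions, not any vendor's implementation.

```python
# Minimal sketch of top-N DDA precursor selection from one MS1 survey scan.
# All names, thresholds, and the exclusion scheme are hypothetical.

def select_precursors(survey_scan, top_n=10, min_intensity=1e4, excluded=None):
    """survey_scan: list of (mz, intensity) tuples from a single MS1 scan."""
    excluded = excluded if excluded is not None else set()
    candidates = [(mz, inten) for mz, inten in survey_scan
                  if inten >= min_intensity and round(mz, 2) not in excluded]
    # Rank by intensity: ions that never make the cut are never fragmented,
    # which is how stochastic missing values arise from run to run.
    candidates.sort(key=lambda peak: peak[1], reverse=True)
    selected = [mz for mz, _ in candidates[:top_n]]
    # Dynamic exclusion: avoid re-selecting the same precursor next cycle.
    excluded.update(round(mz, 2) for mz in selected)
    return selected, excluded
```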

DIA mass spec, on the other hand, selects broad m/z windows and fragments all precursors in each window, which allows the instrument to collect MS/MS spectra on all the ions in a sample. Use of broad m/z windows, however, presents a challenge for DIA analysis in that they result in very complicated spectra with considerable noise, as the precursors captured in these windows interfere with one another. This has meant that though DIA offers more reproducible quantification, it typically measures less of the proteome than a DDA experiment.
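
A basic DIA acquisition scheme can be sketched in a few lines; the m/z range and 25 m/z window width below are assumptions loosely modeled on commonly published methods rather than any particular vendor's settings.

```python
# Sketch of a DIA method: step fixed-width isolation windows across the
# precursor m/z range and fragment everything co-isolated in each window.
# Range and width are hypothetical but typical of published methods.

def dia_windows(mz_start=400.0, mz_end=1200.0, width=25.0):
    """Yield (low, high) isolation windows covering the precursor range."""
    low = mz_start
    while low < mz_end:
        yield (low, min(low + width, mz_end))
        low += width

windows = list(dia_windows())
print(f"{len(windows)} windows, from {windows[0]} to {windows[-1]}")
# Every precursor within a window is fragmented together, so each MS/MS
# spectrum is a mixture that downstream software must deconvolute.
```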

As instrumentation and software improve, however, both approaches draw closer to the goal of enabling reliable proteome-wide quantification across large sample sets, raising the question of whether the two methods might ultimately converge, at least in terms of performance if not in their underlying methodologies.

All the major proteomics mass spec vendors offer both DIA and DDA methods, but Thermo Fisher Scientific is perhaps most prominent in promoting its instruments, and its Q Exactive line in particular, for use in both types of workflows. At ASMS the company had news both on the DIA side of things, where it announced a new co-marketing agreement with Swiss targeted proteomics firm Biognosys, and on the DDA side, where it announced its new DDA-Plus workflow, which it said would significantly reduce missing values in DDA data sets.

In both DIA and DDA, customer demands are placing a premium on efficient, streamlined approaches to analyzing samples in "a robust, routine, standardized, reproducible way," said Ken Miller, Thermo Fisher's vice president, omics marketing, chromatography, and mass spectrometry. This has become especially true as proteomics has moved further into the translational and clinical space, where the ability to reproducibly quantify proteins across thousands of samples is key.

As for which approach will prove best suited to this goal, Miller suggested that is for the company's customers to decide.

"We have some group of users who have been doing DDA-based analysis for a long time, and so we've worked to make our DDA method as robust and reproducible as possible and to minimize some of the downsides. Hence the new DDA-Plus workflow," he said. "Then I've got other groups of customers who don't want to use DDA, they want to use DIA. So, I feel like we also need to have a robust DIA workflow."

Additionally, he said, there is a set of customers who prefer to use the company's TMT isobaric tagging workflow, a DDA-based labeling approach that allows multiplexing of up to 10 samples in a single run.

"The conclusion I've come to is that if we support these three workflows, we satisfy the needs of 90-plus percent of customers doing this kind of work," Miller said.

As for which approach is ultimately best suited to which purpose, he suggested that in his view it is largely a matter of each researcher's personal preference. "It almost comes down to a philosophical difference," he said.

Jesper Olsen, a professor at the University of Copenhagen and an early-access user of Thermo Fisher's new Q Exactive HF-X mass spec, provided some insight into how his lab thinks about when to use which approach. Olsen said that his lab does not use the company's DDA-Plus method but rather its own software, which similarly uses accurate mass and retention time data to fill in missing values across samples. He said that using his lab's DDA workflow on the Q Exactive HF-X, he and his colleagues are able to reliably quantify around 9,000 proteins across different samples.

"These are numbers we could only dream about approaching just a few years ago," he said, noting how DDA workflows and mass spec instrumentation have advanced over that time.

Using a DIA workflow on the Q Exactive HF-X, Olsen said his lab is able to quantify around 6,000 proteins. These aren't apples-to-apples comparisons, however. The lab's DDA workflow uses extensive offline fractionation (dividing, for instance, HeLa cell samples into 46 fractions) followed by a total of around 12 hours of analysis time.

Its DIA workflow, on the other hand, quantifies the aforementioned 6,000 or so proteins using no offline fractionation and a half-hour analysis. Using DDA under similar conditions, Olsen and his colleagues typically quantify around 4,000 proteins.

He suggested that continued technological improvements will ultimately close this gap.

"In the near future, we'll probably have instruments that can scan so fast that DDA, in principle, becomes DIA," he said. "I mean that you will be able to fragment everything, every single mass unit throughout a single scan cycle."

That said, "We are not there yet," Olsen noted. "There is definitely still a significant advantage in DIA-type acquisition for single-shot runs."

Most DIA algorithms identify peptides by matching them to a spectral library generated by an initial DDA run. While researchers are developing spectral libraries for commonly analyzed sample types that can be used in a more or less "off-the-shelf" manner, Olsen said he and his colleagues have found they achieve the best results using libraries generated on the specific samples they are studying via extensive fractionation followed by DDA analysis.

"We can see that it really helps to have spectral libraries that are basically acquired on similar types of samples with the same [LC] gradients and on the same instruments," he said. "We would typically still try to build up an experiment-specific, DDA-based spectral library for the DIA runs that we would subsequently do. If, for example, we had a hundred tumors that we need to profile, we would first take a couple of them and maybe do very extensive fractionation and do a DDA run to build up a very high quality library that we could then use all the DIA runs that we would subsequently do."

Contrasting somewhat with Thermo Fisher's ecumenical approach, Sciex has made the DIA advantage Olsen spoke of the centerpiece of its recent proteomics strategy. The company offers a full range of quantitative workflows along with its own isobaric labeling reagents, but it most heavily promotes its Swath DIA method, which significantly boosted commercial interest in DIA mass spec when it launched in 2011.

Swath is the cornerstone of what Sciex has called its "industrial proteomics" approach, in which the company aims to develop workflows allowing researchers to collect quantitative data on tens of thousands of samples in a streamlined and reproducible manner.

"DDA isn't going away," said Mark Cafazzo, director of Sciex's academic/omics business. "We still have plenty of customers doing [DDA or isobaric labeling] techniques there where they just want to have a small study and they want to do a quick comparison and look for proteins that have been up or down regulated."

"But the demand of precision medicine is to get a better understanding of the molecular mechanisms underneath the disease you're studying, and to do that, you need larger studies. You need more replicates. You need more samples," he said. "And if you're going to be making quantitative conclusions across these large biological studies where the samples take a long time to run and there's a ton of variability in both the subjects and the sample prep, you need an analytical technique that is exceptionally reproducible and as comprehensive as possible in terms of the coverage."

One of the showcases for Sciex's Swath-based efforts is the Australian Cancer Research Foundation International Centre for the Proteome of Cancer (ProCan), which opened last year and aims to use Swath mass spec to generate proteomic profiles of roughly 70,000 tumor samples over the next seven years.

Another site where Sciex is helping to implement a similar idea is the University of Manchester's Stoller Biomarker Discovery Centre, which also opened last year with a suite of Sciex mass spectrometers and similarly aims to do Swath-based protein biomarker discovery on the scale of thousands to tens of thousands of patient samples.

Though the first mass spec firm to offer a commercial DIA method, Waters has promoted the approach somewhat less than Sciex and Thermo Fisher. It did highlight at ASMS the ongoing development of its SONAR DIA mass spec method, which it initially released last September at the Human Proteome Organization's annual meeting. The method, which runs on its Xevo G2-XS QTOF, uses the quadrupole to actively scan across the large m/z windows used in DIA techniques, providing additional separation of precursors and improving assay performance.

Speaking at the meeting, Gary Harland, Waters' senior director of product management for mass spectrometry, said that company researchers had found they were able to identify more than 35 percent more protein groups using the SONAR method compared to a traditional Swath approach. However, he did not provide specifics regarding the setups of the two experiments.

Michael MacCoss, associate professor at the University of Washington, whose lab specializes in the development of targeted proteomics software, including for DIA methods, said during ASMS that while advances in instrumentation may ultimately enable DDA methods to measure samples as comprehensively as DIA, he believes that "the faster these instruments get, the rationale for doing DDA becomes less and less [strong]."

Essentially, this is because the same jumps in speed that would improve DDA workflows would also boost the performance of DIA workflows, allowing, for instance, the use of narrower isolation windows, which would ease the challenge of deconvoluting DIA spectra.
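
Some back-of-the-envelope arithmetic, with purely hypothetical numbers, illustrates the tradeoff:

```python
# With a fixed cycle time to sample each LC peak, faster MS/MS scanning
# buys more, and therefore narrower, DIA windows over the same precursor
# range. All numbers below are hypothetical.

MZ_RANGE = 800.0      # precursor span covered, e.g. m/z 400-1200
CYCLE_TIME_S = 3.0    # time budget per cycle so each LC peak is sampled

for scans_per_sec in (10, 20, 40):
    n_windows = int(CYCLE_TIME_S * scans_per_sec)
    print(f"{scans_per_sec} scans/s -> {n_windows} windows of "
          f"{MZ_RANGE / n_windows:.1f} m/z each")
```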

At the same time, DIA would retain certain advantages over typical DDA workflows, MacCoss said, noting, for instance, that the MS2-level quantitation used in DIA is more sensitive and has less interference than the MS1-level data used in DDA-based label-free quantitation.

He suggested that one reason some researchers still prefer DDA methods is that DDA data analysis tools are more straightforward.

"I think the data collection by DIA is great," he said. "It's robust. It's easy to do. But the data analysis still hasn't reached a level of maturity."

In a recent interview on DIA workflows, Emily Chen, director of the shared proteomics resource at Columbia University Medical Center's Herbert Irving Comprehensive Cancer Center, suggested that this was particularly an issue for Thermo Fisher's DIA offerings.

The company's recent deal with Biognosys, which specializes in DIA data analysis software, can be seen as part of an effort to fill this gap.

MacCoss noted as well that stable, reproducible chromatography remains a significant challenge for researchers using DIA.

"If you're trying to quantify something from data-independent data, the quality of your chromatography becomes very important, and that's still a challenge among a large number of samples for people including our lab," he said. "I think now LC from complex samples is one of the biggest challenges that we still have to overcome."

Perhaps in recognition of this, vendors have in recent years begun offering microflow LC, which is more robust and reproducible than nanoflow but still provides high sensitivity. Thermo Fisher introduced at ASMS its UltiMate 3000 RSLCnano, a new LC system that can be used for both nanoflow and microflow.

More generally, MacCoss said he sees the field shifting from an emphasis on measuring as many proteins as possible in a given sample to a focus on quantifying somewhat smaller sets of proteins but in a more reproducible manner.

"I think that is a major theme that's come out of this year's ASMS," he said. "People are very interested in reproducibility and quantitative analysis. I'm impressed at how many tools developed for quality control and system suitability are being presented [at the conference]. I think that's a good, healthy sign that our field is less worried about the numbers and more worried about the quality of their data. I think that's a very positive thing."