NEW YORK (GenomeWeb) – While data-independent acquisition (DIA) mass spec has seen significant uptake by researchers in recent years, it has been little used to date for the study of post-translational modifications like phosphorylation.
Improvements in instrumentation and software could change that, said Jesper Olsen, professor at the University of Copenhagen's Novo Nordisk Foundation Center for Protein Research and first author of a recent study in the Journal of Proteome Research that evaluated the performance of Thermo Fisher Scientific's Q Exactive HF-X mass spectrometer for applications including phosphoproteomic analyses.
Thermo Fisher launched the instrument at this year's American Society for Mass Spectrometry annual meeting, where Olsen, an early-access user, presented findings demonstrating that, among other things, it could achieve the same proteome coverage as the previous version of the Q Exactive using half the separation time or with 10-fold less sample.
He and his colleagues also found that they could reproducibly quantify around 9,000 proteins using data-dependent acquisition combined with extensive offline fractionation and around 6,000 proteins using a 30-minute single-shot DIA workflow.
It is this latter workflow that Olsen has been exploring heavily in the months since the ASMS meeting. While he did essentially no DIA analysis before working with the Q Exactive HF-X, he said the higher performance of this newer instrument has made it, to his mind, a very interesting and potentially powerful approach.
"It's fast enough that we can go through the whole mass range in a relatively short period of time," he said. "And we see that we can cover an order of magnitude more in dynamic range than in DDA mode [also run on a short gradient without fractionation]."
Using a single-shot DDA approach the researchers are able to reproducibly measure around 4,000 proteins in a half-hour experiment. "But we can go beyond 6,000 using DIA mode," Olsen said. "And we're reaching some of the low-abundance [proteins] that we want to cover."
Based on that success, Olsen said his lab is now trying to extend its DIA analyses to phosphoproteomics work.
In conventional DDA experiments, the mass spec performs an initial scan of precursor ions entering the instrument and selects a sampling of those ions for fragmentation and generation of MS/MS spectra. However, because instruments can't scan quickly enough to acquire all the precursors entering at a given moment, many ions — particularly low-abundance ions — are never selected for MS/MS fragmentation and so are not detected. This means that different peptides are selected for measurement across different runs, which can make it difficult to reproducibly quantify the same proteins across large sets of samples.
DIA mass spec, on the other hand, selects broad m/z windows and fragments all precursors in each window, which allows the instrument to collect MS/MS spectra on all the ions in a sample. Use of broad m/z windows, however, presents a challenge for DIA analysis in that they result in very complicated spectra with considerable noise, as the precursors captured in these windows interfere with one another. This means that, though DIA offers more reproducible quantification, it typically measures less of the proteome than a DDA experiment, though Olsen's work indicates that DIA can outperform DDA in single-shot experiments using short LC gradients.
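The acquisition difference described above can be illustrated with a toy simulation. This is only a conceptual sketch, not vendor acquisition logic: the precursor list, abundances, window width, and top-N depth are all made-up illustrative values.

```python
import random

# Toy precursor list: (m/z, abundance). Values are illustrative only.
random.seed(0)
precursors = [(random.uniform(400, 1199), random.lognormvariate(0, 2))
              for _ in range(5000)]

def dda_top_n(precursors, n=20):
    """DDA-style selection: the survey scan ranks precursors by abundance
    and fragments only the top N per cycle, so low-abundance ions are
    often never selected for MS/MS."""
    ranked = sorted(precursors, key=lambda p: p[1], reverse=True)
    return ranked[:n]

def dia_windows(precursors, lo=400, hi=1200, width=25):
    """DIA-style selection: step fixed m/z isolation windows across the
    range and co-fragment every precursor inside each window, so every
    ion contributes to some (chimeric) MS/MS spectrum."""
    windows = []
    mz = lo
    while mz < hi:
        windows.append([p for p in precursors if mz <= p[0] < mz + width])
        mz += width
    return windows

sampled = dda_top_n(precursors)
wins = dia_windows(precursors)
print(len(sampled))               # only 20 precursors fragmented in the cycle
print(sum(len(w) for w in wins))  # all 5000 precursors land in some window
```

The trade-off the article describes falls out directly: DDA samples a clean but abundance-biased subset, while DIA covers everything at the cost of many precursors sharing each fragmentation window.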
The complexity of DIA spectra also creates challenges for using the approach for detection of protein post-translational modifications.
"There are only a few studies that have really dealt with PTM analysis in DIA mode," Olsen said. "DIA is still kind of in its infancy, so I guess it needed to develop for normal applications first before its [application] to post-translational modifications. But that's really what we're trying to do now, because it looks really good on this new instrument with faster scanning speed and the higher sensitivity."
Key to such analyses is generating a high-quality spectral library and having very reproducible chromatography, Olsen said.
"We are not only faced with identifying the peptide sequence, but we also want to be able to localize the modification site within that peptide sequence," he said. "So we have to rely on these high-quality spectral libraries that we generate with our normal [extensively fractionated] DDA workflow. We also have very accurate retention time alignment between the runs."
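The retention-time alignment Olsen mentions can be sketched in its simplest form: fit a mapping from library retention times to the current run's observed times using peptides seen in both, then use that fit to predict where a library peptide should elute. Real DIA tools use more sophisticated (often nonlinear) models; the anchor values below are hypothetical.

```python
# Hypothetical anchor peptides present in both the spectral library and
# the current run: (library_RT_min, observed_RT_min). Values are made up.
anchors = [(5.0, 5.6), (12.0, 12.9), (20.0, 21.4), (28.0, 29.8)]

def fit_rt_alignment(anchors):
    """Least-squares line observed = a * library + b -- the simplest
    version of the between-run retention-time alignment that DIA
    software performs before extracting fragment traces."""
    n = len(anchors)
    sx = sum(x for x, _ in anchors)
    sy = sum(y for _, y in anchors)
    sxx = sum(x * x for x, _ in anchors)
    sxy = sum(x * y for x, y in anchors)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

a, b = fit_rt_alignment(anchors)
# Predict where a library peptide with RT 16.0 min should elute in this run.
predicted = a * 16.0 + b
```

A tight, reproducible gradient keeps the residuals of this fit small, which is why Olsen emphasizes chromatographic reproducibility: the narrower the extraction window around the predicted retention time, the less interference from co-eluting precursors.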
Software is also an area where further development is needed, Olsen said. This year, a team led by Swiss Federal Institute of Technology (ETH) Zurich professor Ruedi Aebersold published a study in Nature Biotechnology on an algorithm for identification and quantification of protein post-translational modifications in Swath-style data-independent acquisition data sets.
That followed the release in 2016 by researchers at the Institute for Systems Biology of their SwathProphetPTM algorithm, which similarly brought PTM analysis tools to DIA data.
Olsen said his lab is working with proteomic company Biognosys, a spinout from Aebersold's lab, to incorporate tools for PTM analysis into its Spectronaut DIA software, which Olsen's lab currently uses for its regular DIA work.
He said that his lab's DDA experiments are able to reproducibly measure around 5,000 phosphopeptides in a 15-minute mass spec run, while they can measure around 20,000 phosphopeptides in 15 minutes using DIA.
"That's 20,000 phosphopeptides we can get in a single shot with no fractionation and a very short gradient," he said, noting that this enabled the kind of high throughput required for experiments looking at, for instance, the effect of different conditions and drugs on cell signaling.
"We have mainly used [the approach] to study receptor tyrosine kinase signaling, where we can activate cells with different growth factors and simultaneously look at specific downstream signaling pathways," Olsen said. "So we have to do everything at different time points, with different concentrations of the drugs, and when we have a whole panel of drugs that we want to go through you can do the math and see why that would basically not be possible to do if we had to do it the traditional way, where we fractionate every single sample and spend maybe up to a day [analyzing] each of them."
He added that, because they don't fractionate their samples, they need less material to start with, making the workflow feasible for analysis of actual clinical samples as well. He said they have recently begun to test the effectiveness of the approach with such samples.