
More Data, More Problems: Qual-Quan Mass Spec Presents New Software Challenges

By Adam Bonislawski

The introduction of AB Sciex's TripleTOF 5600 mass spectrometer last week at the American Society for Mass Spectrometry annual conference pushed the concept of qual-quan technology to the fore, as the major mass spec vendors highlighted the ability of certain of their instruments to offer both the qualitative capabilities of a high-resolution, accurate-mass system and the quantitative sensitivity of a triple-quadrupole machine (PM 5/28/2010).

These emerging qual-quan systems bring with them new challenges, however. In particular, running simultaneous qual-quan workflows generates significantly more data than many conventional mass spec assays – up to five times as much – creating the need for vendors to ensure that their software offerings keep pace with the expanded capabilities of their hardware.

"Certainly we recognize that accompanying great hardware there also has to be great software," Dominic Gostick, director of the biomarker mass spectrometry business at AB Sciex, told ProteoMonitor. "The raw data you're getting [from the TripleTOF 5600] is far more complex, far more information rich. And so it really falls to us to deliver solutions that can take that information and process it into something that's usable to the customer."

Gostick cited the company's MultiQuant 2.0 quantitation application as a piece of software designed with the TripleTOF 5600 in mind.

"We've effectively rewritten our quantitation software and delivered new quantitation software that's capable of handling traditional triple-quad MRM as well as the new high resolution data, full scan MS, and MS/MS data from the 5600," he said.

He also noted the development of a plug-in module for the company's PeakView proteomics software that allows researchers to pull out quantitative information on an identified protein with a single click. The plug-in was developed in collaboration with Lorne Taylor, director of the Ontario Proteomics Methods Centre at the Samuel Lunenfeld Research Institute of Mount Sinai Hospital, who worked with the TripleTOF 5600 in advance of its release.

"That was a significant innovation that really enabled Lorne to do qual-quan on a single platform in a single experiment," Gostick said.

Perhaps as important as the PeakView plug-in's functionality is its speed, Taylor told ProteoMonitor.

"The most tremendous thing is the speed at which it works," he said. "Though it's just a change in speed it almost qualitatively changes the structure of research. It becomes more of an integrative, real-time tool for investigators, rather than a batch system."

Presently, Taylor said, he and AB Sciex are collaborating on software to create equally streamlined workflows for MS/MS quantification.

"Software is absolutely key," Iain Mylchreest, vice president and general manager of Life Sciences Mass Spectrometry at Thermo Fisher, told ProteoMonitor. "The experiments that are being performed now are both quantitative and qualitative. How do you tie that information back together and make it biologically meaningful? The whole informatics side is going to be very important."

Mark Turner, CEO of bioinformatics firm Proteome Software, recalled similar software concerns surrounding the launch of Thermo Fisher's LTQ Velos machine last year.

"We just spent a long time coming out with a new version of our software [Scaffold] to take into account the larger data files and more spectra being generated by mass spectrometers," he told ProteoMonitor. "It's an ongoing process."

Thermo Fisher, Mylchreest said, has "a very complete suite of software to support both the qualitative side of proteomics and the quantitative side of proteomics."

Growing interest in top-down proteomics and intact protein analysis could outstrip current software capabilities, however, Mylchreest suggested.

"People are going to want to look at proteins and glycosylations and how things are folded and where the disulfide bonds are," he said. "The tools are already in place from a mass spectroscopy standpoint. The software is going to have to be developed to support those types of experiments. Most of it is in place, but there are some gaps that need to be closed."

James Atwood, CEO of Athens, Ga.-based bioinformatics firm BioInquire, told ProteoMonitor that mass spec vendors have developed solid software for managing the workflow of their machines but have spent less time on developing software to manage the large amounts of data the new qual-quan platforms will produce.

"The manufacturers are coming out with instruments that will produce more data, but they seem to be focused more on software for workflow purposes than for actually managing that data," he said. "They have automated software pipelines for taking the raw mass spectra and converting it to peptide and protein IDs, but the issue that still needs to be solved is how do you effectively merge and compare six months of mass spec runs?"

"They'll help you produce more, but they don't really give you a solution on the backend for taking 1,000 runs, 2,000 runs and combining them together in one dataset," he said.

"The mass spec vendors have very bare bones acquisition software generally," Taylor agreed. "That being said, they all realize this, and if you look at the software groups of all the major mass spec vendors, they're beefing up and trying to figure out how to wrestle this in."

"We're always driven very much by what our customers require," Gostick said, citing AB Sciex's MarkerView software as an example of the sort of data-management program mentioned as lacking by Atwood. "The tools and technologies that they require is very much the starting point of the things we develop."

David Chiang, CEO of bioinformatics company Sage-N Research, suggested, however, that mass spec customers might not yet fully realize the data-management challenges the new machines will present.

"I don't know that anyone fully realizes how much data they're dealing with," he told ProteoMonitor. "The instrument companies are mostly talking about throughput, the amount of data per hour that you're getting. But what do you do with the data?"

"Most of the mass spec vendors are developing relatively simple PC software," he said. "It'll get you going if you're not running big experiments, but it's really the entry level."

Chiang suggested proteomics researchers look to the financial industry for models of data management and analysis.

"The hedge fund guys are actually really mature about data analysis," he said. "What they do – which is a great model for post-genomic biology – is this: On one side is the backroom with big servers and storage systems that store the data and have proprietary software that slices and dices the data every which way. On the other side are the high value traders with deep knowledge of currency fluctuations and international commerce."

Applying Chiang's model to mass spec, the traders are the researchers while the backroom is handled by companies like Sage-N Research, which, Chiang said, offers "big Linux boxes [via its Sorcerer platform] running heavy duty, production-class data analysis software."

Ultimately, Chiang said, the challenge is to provide platforms robust enough to handle the large amounts of data generated by qual-quan machines but simple and convenient enough for non-experts to use effectively.

"You're a biologist. You want to focus on the science. To learn about installing software and backing up data is waste of your time. You're too valuable for that," he said.

Gostick said that much of the early feedback AB Sciex has received regarding the TripleTOF 5600 has focused on this issue of usability.

"Customers are looking more and more to the software to be able to generate answers and results in a one-click simplification of workflow," he said. "Our ideal goal is to make these instruments as walk-up as they can possibly be."