Panelists Recommend Steps to Overcome Mass Spectrometry’s Reproducibility Problem

A panel of researchers from academia and government said that scientists looking to limit reproducibility problems in their proteomics research should use standard operating procedures, run pilot studies ahead of full-blown ones, and rely on the consistency afforded by robotics.
 
The recommendations came during a panel discussion held this week by Science on reproducibility in mass spectrometry-based biomarker discovery, a key problem in proteomics research.
 
Speaking at the session were Martin Latterich, who currently holds a faculty position jointly at the Montreal Heart Institute and the University of Montreal department of pharmacy; Toni Whistler, senior service fellow in the Chronic Viral Diseases Branch of the US Centers for Disease Control and Prevention; and Timothy Veenstra, director of the Laboratory of Proteomics and Analytical Technologies, SAIC-Frederick at the National Cancer Institute.
 
Reproducibility, or the lack of it, is a well-documented and much-discussed issue in the proteomics field, and any conversation with a researcher about the field’s obstacles and bottlenecks is all but guaranteed to turn to scientists’ inability to reproduce each other’s work.
 
In 2002, in an attempt to address the problem, the Human Proteome Organization started its Proteomics Standards Initiative to help scientists observe and replicate their colleagues’ research. Another HUPO initiative, the Industry Advisory Board, is preparing a paper aiming to debunk a long-held view that 2D-gel experiments are not reproducible [See PM 11/15/07].
 
Standard Fare
 
According to NCI’s Veenstra, one challenge of reproducibility in biomarker discovery is brought on by a lack of standardization in sample collection, which makes it difficult for researchers to make even the simplest decisions about experiments, such as whether plasma or serum should be used, or how the freezing and thawing of samples could affect them.
 
While there are many methods of mass spec-based analysis for biomarker discovery, Veenstra’s lab has focused on subtractive, or shotgun, proteomics done on an LC-MS/MS platform. The approach has a number of advantages, such as high sensitivity, a wide dynamic range, and the ability to compare any number of samples to one another.
 
However, the approach also has several steps where irreproducibility can be introduced into an experiment, starting with sample preparation, Veenstra said. To deplete high-abundance proteins, his lab uses a number of techniques, including immunodepletion and molecular weight cut-off filters.
 
Numerous commercial products are available for high-abundance protein depletion, but “they all have a sense of providing some irreproducibility. There’s no standardization,” he said.
 
By the same token, tryptic digestion of proteins is a relatively simple and straightforward process, but different researchers use different digestion times and protein-to-enzyme ratios, which can create complications for anyone trying to replicate an experiment.
 
“There’s no standard in the field right now,” Veenstra said. Rather, labs tend to develop their own, and the best ones tend to come from contract research organizations and clinical labs, he said.
 
Similarly, fractionation, mass spectrometry, and database searching all suffer from a lack of standards that compromises researchers’ ability to reproduce each other’s work, Veenstra said.
 
In mass spectrometry, the lack of standards contributes to irreproducibility through the variety of platforms in use today, which tend to generate different data for the same experiment, and through undersampling, which leaves large numbers of biomarker candidates identified by only a small number of peptides. As a result, these biomarkers are not reproducibly identified across many of the samples.
 
“We can actually see many … differences in peptides identified in case versus control samples,” Veenstra said. “However, the confidence level associated with the differences is way too low to move forward to the verification and validation stage.”
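
Veenstra did not describe a specific filtering scheme, but a minimal, hypothetical sketch of the kind of peptide-support filter that undersampling forces on candidate lists might look like the following; the field names, thresholds, and example values are illustrative assumptions, not his lab's actual criteria.

```python
# Hypothetical sketch: filtering biomarker candidates by peptide support and
# detection rate. All names and cutoffs here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Candidate:
    protein: str          # candidate biomarker protein
    peptide_count: int    # distinct peptides supporting the identification
    detected_in: int      # number of samples in which it was detected
    total_samples: int    # samples analyzed in the study

def confident_candidates(candidates, min_peptides=2, min_detection_rate=0.5):
    """Keep only candidates seen with enough peptides in enough samples."""
    kept = []
    for c in candidates:
        detection_rate = c.detected_in / c.total_samples
        if c.peptide_count >= min_peptides and detection_rate >= min_detection_rate:
            kept.append(c)
    return kept

candidates = [
    Candidate("PROT_A", peptide_count=1, detected_in=3, total_samples=60),   # undersampled
    Candidate("PROT_B", peptide_count=5, detected_in=48, total_samples=60),  # reproducibly seen
]
print([c.protein for c in confident_candidates(candidates)])  # -> ['PROT_B']
```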
 

Whistler said that in her current work, which uses SELDI-TOF technology to study chronic fatigue syndrome, she and her group implemented standard operating procedures to try to head off reproducibility problems.
 
“We use SOPs in all aspects of our analysis, from the start of our fractionation to the use of QC samples in every run; [in] our protein, peptide standards, [and] randomization of samples; [in] the number of specimens in a run per batch; [and in] the number of batches we can run per week,” she said.
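
Whistler did not share her group’s actual protocols, but as a rough illustration of two of the SOP elements she lists, randomization of samples and a QC sample in every run, a minimal sketch under assumed batch sizes and naming might look like this.

```python
# Hypothetical sketch of two SOP elements Whistler describes: randomizing specimens
# into fixed-size batches and placing a QC sample in every run. The batch size,
# QC scheme, and IDs are illustrative assumptions, not the CDC group's protocol.

import random

def assign_batches(specimen_ids, batch_size=20, seed=42):
    """Randomize specimen order, split into batches, and prepend a QC sample to each run."""
    rng = random.Random(seed)   # fixed seed so the randomization itself is documented and repeatable
    ids = list(specimen_ids)
    rng.shuffle(ids)
    batches = []
    for start in range(0, len(ids), batch_size):
        batch = ["QC_POOL"] + ids[start:start + batch_size]
        batches.append(batch)
    return batches

batches = assign_batches([f"CFS_{i:03d}" for i in range(1, 61)])
print(len(batches), "batches;", batches[0][:4])
```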
 
The SOPs her group used were based on the original training provided by Vermillion, then called Ciphergen, the original seller of the SELDI instrument, Whistler told ProteoMonitor in an e-mail. Her group then built its own protocols.
                       
“Much of the info we have used in the SOPs comes from the literature and our own experience,” she said.
 
According to the University of Montreal’s Latterich, the best sources of established SOPs are labs where similar experiments have already been run.
 
“Once you’ve obtained an SOP, the second-most important aspect is actually getting training by the lab that actually has provided the SOP,” he said. Veenstra added that contract research organizations and clinical laboratories, not academic ones, are the best sources for SOPs.
 
“Their lifeblood is on getting standard operating procedures in place,” he said.
 
Post-analysis, Whistler’s group monitors spectrum calibration, examines batch processing for sample bias, and selects high-quality spectra for datasets.
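
As a hedged illustration of those post-analysis checks, the sketch below flags calibration drift and batch-level bias in QC intensities; the metrics, tolerances, and example numbers are assumptions for illustration, not the CDC group’s actual criteria.

```python
# Hypothetical post-analysis checks along the lines Whistler describes: flag spectra
# whose calibration drifts too far and batches whose QC intensities deviate from the
# overall mean. Metrics and cutoffs are illustrative assumptions.

from statistics import mean

def passes_calibration(observed_mz, expected_mz, tol_ppm=500):
    """Check a calibrant peak's mass error against a tolerance in ppm."""
    error_ppm = abs(observed_mz - expected_mz) / expected_mz * 1e6
    return error_ppm <= tol_ppm

def biased_batches(qc_intensities_by_batch, max_relative_shift=0.2):
    """Return batches whose mean QC intensity deviates from the overall mean."""
    overall = mean(v for values in qc_intensities_by_batch.values() for v in values)
    return [b for b, values in qc_intensities_by_batch.items()
            if abs(mean(values) - overall) / overall > max_relative_shift]

print(passes_calibration(5910.2, 5907.0))                                 # -> False (drifted)
print(biased_batches({"b1": [100, 110], "b2": [95, 105], "b3": [150, 160]}))  # -> ['b3']
```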
 
Pilot Error
 
Another way to overcome reproducibility problems is to perform smaller-scale pilot studies before embarking on full-blown ones. For instance, the CDC’s Whistler said she and her colleagues ran an experiment looking for biomarkers linked to chronic fatigue syndrome in samples from 227 subjects. Before that, however, they ran a pilot study with 60 samples, which allowed them to spot and fix glitches they would otherwise have encountered in the larger study.
 
While the panelists said that pilot studies can cost north of $100,000, not including capital equipment, Veenstra said they are “absolutely critical” to perform before conducting a full-blown experiment.                                                  
 
Latterich added that work should be done at least three times to ensure results are valid.
 
Other steps recommended by the panel included the use of robotics. Latterich said that when he worked in the private sector, including stints at Proteoype, Illumina, and Diversa, the use of robotics was commonplace and resulted in greater consistency in processes.
 
“If you have the same person do an experiment 100 times [and] you have a robot do the same experiment 100 times, you see a tremendous improvement in [the] coefficient of variance, which leads to reproducibility, and better reproducibility of data,” he said.
 
“And so I would say that I see as a big niche in the proteomics area: to come up with more creative ways to actually do sample preparation so that you remove the human element from the actual sample preparation, digestion, up to the injection into the mass spectrometer,” Latterich said.
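
To make the coefficient-of-variation comparison Latterich invokes concrete, here is a minimal sketch, with made-up replicate values, of how one might compute CVs for manually versus robotically prepared replicates; the data and function names are assumptions, not figures from his labs.

```python
# A minimal sketch of the comparison Latterich describes: the coefficient of
# variation (CV) for replicate measurements from a manual versus an automated
# workflow. The replicate values below are made up for illustration.

from statistics import mean, stdev

def coefficient_of_variation(values):
    """CV = standard deviation / mean, expressed as a percentage."""
    return stdev(values) / mean(values) * 100

manual_peak_areas = [1.00, 1.30, 0.85, 1.15, 0.95]   # hypothetical human-prepared replicates
robot_peak_areas  = [1.02, 1.05, 0.98, 1.01, 0.99]   # hypothetical robot-prepared replicates

print(f"manual CV: {coefficient_of_variation(manual_peak_areas):.1f}%")
print(f"robot CV:  {coefficient_of_variation(robot_peak_areas):.1f}%")
```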
