For Many Researchers, HUPO Protein Test-sample Study Underscores the Difficulty of Proteomics

Proteomics is really hard.

As the research community begins to digest the results of a test-sample study published last week by the Human Proteome Organization, the message for many scientists is that even armed with the latest instruments and tools, their work is more painstaking and difficult than it can sometimes seem.

Last week, HUPO published the results from its test-sample study, in which just seven out of 27 laboratories were able to identify all 20 proteins in a sample mix, and just one lab was able to identify all 22 peptides with a mass of 1,250 Daltons [see PM 05/21/09]. However, in a centralized analysis of the raw data, HUPO found that all of the proteins had actually been detected by all of the labs, and that a majority of the 1,250 Da peptides had been detected by all 27 labs.

While not everyone found the study results encouraging — some researchers contacted by ProteoMonitor said they had misgivings about the study but declined to further elaborate because they did not want to publicly criticize HUPO's work — those who did comment called it a useful exercise that, if nothing else, served as a level-headed reminder of just how difficult proteomics is, even for the most experienced scientists.

Given that proteomics still can claim no medical breakthrough, is still regarded by funding agencies as an experimental field, and is on the outside looking in on the clinical world, that seems like a statement of the obvious, but as one researcher said, the field sometimes loses sight of that.

According to Tom Neubert, director of the New York University Protein Analysis Facility and associate professor of pharmacology in the Skirball Institute of NYU School of Medicine, sporadic examples of labs reporting seemingly "spectacular" results — which may or may not pan out — "[set] expectations very high in the field, [but] this stuff is not as easy as everybody thinks."

The 27 participating labs included some of the most renowned proteomics facilities in the world, and still only about one-quarter of them were able to find all the proteins in a relatively simple protein mixture — a fact that underlined the difficulties of the work, said Richard Smith, director of proteomics research at the Biological Sciences Division at Pacific Northwest National Laboratory.

"It is a tremendous challenge for a small laboratory, in particular, to meet all of the challenges, starting right from the beginning, handling samples and processing them and doing reliable chromatography, doing reliable quality mass spectrometry, [and] the bioinformatics analysis. … There are so many places that can go wrong, it leads to these kinds of results."

The study is a result of an initiative started nearly three years ago by HUPO to create a protein-standard mixture [see PM 07/20/06] as a benchmark for the industry. In 2008, Invitrogen commercialized the protein-standard mixture that resulted from the study.
But HUPO decided to further its initiative by exploring why so many labs had trouble characterizing the proteins sent out in the test samples as part of the protein-standard mixture, and to address errors that were made in order to raise the level of the work.

At first blush, the collective results give reason to pause. As Smith said, that so few labs were able to identify all the proteins in the test sample was "disconcerting. … I would have hoped for better."

But he and others also said that the results were not entirely surprising. Indeed, earlier studies, including ones done by the Association of Biomolecular Resource Facilities, presaged the HUPO study results [see PM 02/3/06 and 04/05/07].

The National Cancer Institute's Clinical Proteomic Technology Assessment for Cancer program, or CPTAC, also has been exploring proteomics technology and sources of variability in experiments [see PM 09/04/08].

Daniel Liebler, a professor of biochemistry, pharmacology, and biomedical informatics at the Vanderbilt University School of Medicine, and who is involved in CPTAC's efforts, said in an e-mail that the HUPO study raises "interesting questions about the importance of technology and method variables versus bioinformatics tools in analysis of simple protein mixtures."

In CPTAC's own initiative, researchers have found substantial variation in peptide identifications from a simple protein mixture, much as in HUPO's study, leading to additional investigation of sources of variation due to sample preparation, chromatography, instrument settings, and data analysis methods.

CPTAC will soon report its results, which indicate that multiple system components contribute to variability, "often in unanticipated ways," Liebler said. "The use of iterative study designs and [standard operating procedures] to identify sources of bias and variability will ultimately result in proteomics platforms with performance characteristics that meet the requirements of a biomarker development pipeline."

NYU's Neubert said that in studies such as HUPO's, where "a sample is handed out to a large number of labs, performance is always far below what people expect it to be … especially when there's a task that's very specific on paper [that] appears to be easy."

Because the technology being used is stochastic in nature, "there is an element of chance that any given peptide will be sequenced in an experiment like that. So sometimes a peptide will get sequenced and other times not. It depends on who its neighbors are, for example. It depends on the decision that the mass spec makes," he added.
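Neubert's point about chance can be illustrated with a toy simulation of data-dependent acquisition, in which the instrument typically fragments only the top-N most intense co-eluting precursors in each survey scan. All intensities and the top-N setting below are invented for illustration; this is a sketch of the general idea, not any specific instrument's logic.

```python
# Toy model of data-dependent acquisition: the instrument fragments
# only the top-N most intense co-eluting precursors, so whether a
# given peptide is sequenced depends on which neighbors it elutes with.
# Intensities and TOP_N are hypothetical values for illustration only.

TOP_N = 2  # assume the instrument fragments the 2 most intense precursors

def selected_for_fragmentation(coeluting, top_n=TOP_N):
    """Return the peptides chosen for MS/MS in one survey scan.

    coeluting: list of (peptide_name, intensity) pairs.
    """
    ranked = sorted(coeluting, key=lambda p: p[1], reverse=True)
    return [name for name, _ in ranked[:top_n]]

# The same peptide "A", at the same intensity, in two different scans:
quiet_scan = [("A", 5e4), ("B", 1e4)]
busy_scan = [("A", 5e4), ("C", 9e4), ("D", 8e4)]

print(selected_for_fragmentation(quiet_scan))  # ['A', 'B'] -> A is sequenced
print(selected_for_fragmentation(busy_scan))   # ['C', 'D'] -> A is missed
```

In the quiet scan the peptide is selected; in the busy scan it is outcompeted by more intense neighbors and never sequenced, even though it is present at the same abundance in both cases.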

The HUPO authors concluded from their centralized analysis that the participating labs "had in fact generated mass spectrometry data of very high quality," and attributed missed identifications to other factors such as false negatives, environmental contamination, database matching, and curation of protein identifications.

Alexander Ivanov, a research scientist at the Harvard School of Public Health and director of the Harvard Proteomics Resource, described HUPO's interpretation of the results as "pessimistic." When naming errors and incomplete matching of tandem MS spectra due to acrylamide alkylation, both of which he called "quite minor," are taken out of the equation, "almost everybody did well," he said.

Paul Rudnick, a researcher at the National Institute of Standards and Technology, added that he wouldn't expect any better results from 27 DNA sequencing labs that were given a short-read run and a database containing splice variants, and then were evaluated based on the similarity of the genes they chose as partially sequenced.

"'Round robin' experiments are notoriously difficult if the experimental variables are not minimized to whatever extent is acceptable," he told ProteoMonitor in an e-mail. "In this report, the authors showed this to be true with the re-analysis of the data through a common pipeline, also demonstrating that data processing and analysis methods are far from uniform across labs."

And though there seems to be little improvement in the performance of proteomics labs in such studies year to year, study to study, researchers said that the field is moving forward.

"The quality, [the] level of outcome of proteomics is higher and higher and the quality of the data is better and better," Ivanov said. "And now instead of reporting just a set of detected proteins, people are actually trying to address biological issues and biological questions using proteomics. And now the approaches are [more] capable of providing reliable data."

Proteomic Pitfalls

For some, especially more experienced researchers, the problems encountered by the individual labs were not a surprise. As a reviewer for journal publications, Smith said that he encounters many of the issues described in the HUPO study, and added that he and the researchers in his lab "at least worry about the issues." But because more labs are doing proteomics work now and are "still in the process of growing up," the study points out traps that need to be avoided.

One source of errors that HUPO officials identified as especially glaring was the quality of bioinformatics tools. "The search engines used in this study at present cannot distinguish among different identifiers for the same protein, deriving from the way the databases are constructed," the authors of the HUPO study, published May 17 online in Nature Methods, said. "Indeed, the search engines used either for the centralized data analysis or by the individual labs suggest an erroneous confidence to the assignments of peptides and proteins." Search engines used in the study included Mascot, Sequest, Spectrum Mill, IdentityE, ProteinPilot, and X! Tandem, among others.

For Neubert, that finding was unexpected. "I thought the databases and the software were actually quite reliable by now and it would be a problem of data," he said, so "now the people who are in charge of curating the databases know what to fix."

According to Rudnick, proteins appear in databases in many forms, and in order for the same protein to appear as the same entry in multiple databases, it has to be completely identical. Though this can be limiting, he suggested that proteomics researchers, at least in some cases, can address the problem with a bit of extra work. Comparing the situation to a phone book with many listings for "Smith," he said, "Clearly, one must try hard to deduce more information in order to identify the correct person, and at the same time, not report all 'Smiths' as the same person."
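The identifier redundancy Rudnick describes can be sketched in a few lines: when the same protein carries different accession numbers in different databases, one way to avoid reporting it twice is to group entries by their amino-acid sequence. The accessions and sequences below are invented for illustration, and real pipelines must also handle the harder case of nearly identical isoforms, which this sketch does not.

```python
# Hypothetical illustration of redundant protein identifiers: the same
# sequence can appear under different accession numbers (for example,
# one entry from each of two databases), and a naive report would count
# them as two distinct proteins. Grouping by sequence collapses exact
# duplicates into one record.
from collections import defaultdict

# Invented (accession, sequence) pairs for illustration only.
entries = [
    ("P12345", "MKTAYIAKQRQISFVK"),
    ("NP_000001", "MKTAYIAKQRQISFVK"),  # same protein, different identifier
    ("Q99999", "MSLNNAVELVKQ"),
]

def collapse_by_sequence(entries):
    """Group accessions whose sequences are completely identical."""
    groups = defaultdict(list)
    for accession, sequence in entries:
        groups[sequence].append(accession)
    return list(groups.values())

print(collapse_by_sequence(entries))
# [['P12345', 'NP_000001'], ['Q99999']]
```

Exact-match grouping is the easy case Rudnick alludes to; distinguishing true isoforms from redundant listings is the part that, like the phone book's many Smiths, requires extra information.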

He and others also said the study underscores the need for continued work in developing standards in proteomics. The study highlights "the importance of the work done by groups such as HUPO's [Proteomics Standards Initiative], who are developing open representations of proteomics data formats," NIST's Rudnick said. "It is also important that this group and the community continue to evolve those standards and make them accessible with tools."

From the vendor side, companies can, and do, continue to develop protocols and controls "to make sure your instrumentation is working properly," according to Aran Paulus, R&D manager for new technologies and applications in the Laboratory Separations Division of Bio-Rad Laboratories.

However, said Smith, based on the results of the HUPO study, the bottleneck right now is not with the instruments, but with the people using them.

"I'd have to say the instrument manufacturers have done their part. … It's an enormous challenge, however, to bring together all the different pieces that are needed to do proteomics really well," he said. "And that looks like it's going to be something we're going to have to work on for a number of years."
