The Decade In Proteomics: Most Significant New Applications

This is the first of four articles surveying leading proteomics researchers about the most notable achievements in proteomics during the 2010s. Part 2 can be found here, part 3 here, and part 4 here.

NEW YORK – The last decade was an eventful one for proteomics as the field saw a number of advances ranging from new workflows and applications to improvements in instrumentation and bioinformatic innovations. To get a sense of the key developments, we asked leading researchers in the field for their picks of the most notable achievements in proteomics throughout the 2010s.

Not surprisingly, their selections covered a broad swath of proteomics research, painting a diverse picture of the decade's activity. At the same time, several frequently cited items emerged as themes, which we are covering in a series of four articles, focusing on the most significant new applications, instruments, informatics tools, and large-scale projects from the last ten years.

We start with new applications, a category that encompasses many of the selections of the scientists surveyed. That perhaps reflects the fact that during the 2010s, proteomics moved from something of a niche discipline to a tool used widely throughout life sciences research.

"I think the most important [development] is the full move of proteomics into biology, and its massive insight into novel mechanisms," said Jennifer Van Eyk, principal investigator for research and director of the Advanced Clinical Biosystems Institute in the Department of Biomedical Sciences at Cedars-Sinai.

This impact, she said, "is hard to quantify in a single paper," but she cited as an example the use of mass spec-based proteomics by Johns Hopkins University scientist and 2019 Nobel Laureate Gregg Semenza in his lab's hypoxia research.

Van Eyk also cited research by Scripps Research Institute professor John Yates III that has used proteomics to characterize the interactome of the cystic fibrosis transmembrane conductance regulator (CFTR) protein, contributing to the development of therapies for this condition.

Additionally, she noted work by her lab that used proteomics to unravel some of the mechanisms underlying the function of the hormone NT-proB-type natriuretic peptide (BNP), work she said had helped restart clinical trials in heart failure that were failing due to a lack of understanding of these mechanisms.

Max Planck Institute of Biochemistry professor Matthias Mann likewise emphasized proteomics' move during the decade from a specialized research tool to an approach with broad impact in the life sciences, though he focused specifically on the clinical space, highlighting proteomics' growing relevance in this area as the key advance of the last ten years.

"My pick would be the 'translational turn,' the fact that proteomics has now really shown that it is ready for the clinic," he said. "Towards the end of the decade, we had the first convincing clinical proteomics studies with serious biomarkers being found for the first time. So, looking back, people may say that that was the major thing that happened in the 2010s."

In 2017, Mann and his colleagues proposed a new approach to protein biomarker development using improvements in instrument technology to measure on the order of thousands of proteins in large patient cohorts, both in the discovery and validation phases of a biomarker project. This approach, which they termed "rectangular" biomarker development, contrasts with traditional biomarker development methods, which have typically used small cohorts profiled extensively during the discovery phase followed by more targeted analysis of potential markers in large cohorts during the verification and validation phases.
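To make the contrast concrete, below is a minimal, purely illustrative sketch of the two designs in terms of the shape of the resulting data matrices; the cohort sizes, protein counts, and candidate-selection rule are assumptions chosen for illustration, not figures or methods from Mann's work.

```python
# Illustrative sketch of "triangular" vs. "rectangular" biomarker study designs.
# All cohort sizes, protein counts, and the candidate-selection rule are
# assumed values for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Traditional ("triangular") design: deep profiling of a small discovery cohort,
# then targeted assays for a handful of candidate markers in a much larger cohort.
discovery_deep = rng.normal(size=(20, 5_000))        # 20 patients x 5,000 proteins
n_candidates = 10
candidate_idx = np.argsort(discovery_deep.var(axis=0))[-n_candidates:]  # arbitrary pick
validation_targeted = rng.normal(size=(1_000, len(candidate_idx)))      # 1,000 patients x 10 assays

# "Rectangular" design: the same broad protein panel is measured in both phases,
# so the data matrix keeps its full width from discovery through validation.
discovery_rect = rng.normal(size=(200, 1_000))       # 200 patients x ~1,000 proteins
validation_rect = rng.normal(size=(1_000, 1_000))    # 1,000 patients x the same panel

print(discovery_deep.shape, validation_targeted.shape)  # (20, 5000) (1000, 10)
print(discovery_rect.shape, validation_rect.shape)      # (200, 1000) (1000, 1000)
```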

Such an approach relies in particular on high throughput, an area where proteomic workflows have seen significant gains in recent years.

Leigh Anderson, CEO of SISCAPA Assay Technologies, cited this increase in throughput as a key development.

"From my perspective, the publication of more studies with 1,000-plus samples is a major indication of progress," he said. He was more circumspect, however, about whether this would translate into true clinical advances.

"I'm not sure how much of a watershed these represent," he said. "It could be one more iteration of discovery proteomics — like the last 30 years — without serious follow up and clinical impact."

Henry Rodriguez, director of the Office of Cancer Clinical Proteomics Research at the National Cancer Institute, cited another emerging proteomic application that has the potential for substantial clinical impact.

"In my opinion, the most significant development over the last ten years is its unification with genomics," he said, adding that he believes this approach, called proteogenomics, will play a key role in precision oncology.

Proteogenomics is the centerpiece of the NCI's Clinical Proteomic Tumor Analysis Consortium (CPTAC), which Rodriguez leads. Driven by technologies like next-generation sequencing and improvements in the breadth and quality of proteomic data, the approach aims to integrate both protein and nucleic acid data, in the hope that combining multiple levels of molecular information will enable better understanding of biological and disease processes and improve biomarker discovery and development.

For instance, while genomic studies have discovered a large number of genetic changes in cancer tissue, it is difficult to assess which are meaningful and which have little or no biological relevance. Proteogenomics can potentially aid such efforts by adding proteomic data to the mix. The hope is that those data can help identify which genomic aberrations are ultimately translated into changes at the protein level, the assumption being that aberrations producing protein-level changes are more likely to be significant than those that do not.
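As a rough, hypothetical illustration of that filtering logic, the sketch below joins a table of somatic variants to tumor-versus-normal protein abundance changes and keeps only variants whose gene products also shift at the protein level; the gene names, column names, and fold-change threshold are assumptions for illustration, not data or methods from CPTAC.

```python
# Hypothetical sketch of proteogenomic prioritization: keep genomic aberrations
# whose gene products also change at the protein level. All values are made up.
import pandas as pd

variants = pd.DataFrame({
    "gene": ["TP53", "KRAS", "BRCA1", "EGFR"],
    "mutation": ["R175H", "G12D", "5382insC", "L858R"],
})

protein_changes = pd.DataFrame({
    "gene": ["TP53", "EGFR", "MYC"],
    "log2_fc": [1.8, 2.4, 0.9],   # tumor vs. normal protein abundance (log2 fold change)
})

# Join variant calls to protein-level measurements; variants with no matching
# protein measurement get NaN and drop out of the filter below.
merged = variants.merge(protein_changes, on="gene", how="left")

# Keep variants whose gene also shows a clear protein-level change (|log2 FC| > 1).
prioritized = merged[merged["log2_fc"].abs() > 1.0]

print(prioritized[["gene", "mutation", "log2_fc"]])
```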

Proteogenomics saw rapid advances during the decade, including its incorporation into major clinical research projects like the APOLLO (Applied Proteogenomics OrganizationaL Learning and Outcomes) initiative, a partnership between NCI, the Department of Defense, and the Department of Veterans Affairs that Rodriguez said aims "to create the nation’s first healthcare system where cancer patients will be routinely screened for genomic abnormalities and proteomic information with the goal of matching their tumor type to a specific targeted therapy."

"Undoubtedly, as technology improves, the integration of proteomics with genomics and other omics such as metabolomics is not only desirable, but more promising than ever — necessary for the progress of discovery and adoption in precision medicine," he said.

Stanford University professor Michael Snyder cited as a 2010s highlight another clinical development: the use of proteomics for large-scale, longitudinal health monitoring projects. One of the leading researchers in this area, Snyder is currently collecting annual samples for his Human Personal Omics Profiling (hPOP) project, which aims to gather proteomic and other data from attendees of the Human Proteome Organization annual meeting. He and his colleagues enrolled 106 participants at the 2016 meeting, 115 at the 2017 meeting, and 90 at the 2018 meeting.

In terms of clinical potential actually fulfilled during the decade, it is hard to top the developments seen in MALDI mass spec-based microbiology, as Bruker and bioMérieux both introduced clinical systems that have made MALDI-based microbial identification a standard method in hospitals and clinical labs around the world.

This was the selection of Michelle Hill, head of the Precision & Systems Biomedicine Laboratory at the QIMR Berghofer Medical Research Institute. "Proteomics has revolutionized clinical microbiology in the last decade," she said.

Scripps' Yates highlighted an application that only emerged near the decade's end: single-cell proteomics.

"New processes … have started to make single-cell proteomics on cells of interest like cancer cells a reality," he said.

The technique is still in its infancy, with one of the first reports of large-scale mass spec-based protein measurements in single cells appearing in 2017. Since then, interest in the approach has grown rapidly. A few months ago, researchers at Pacific Northwest National Laboratory published a high-throughput workflow for single-cell proteomics that is able to identify up to 2,300 proteins per cell and analyze around 100 cells per day.

Immunoassay-based methods aimed at measuring relatively large numbers of proteins at the single-cell level also saw significant advances during the decade. This was particularly true on the commercial side, with companies including Fluidigm, NanoString Technologies, IsoPlexis, Akoya Biosciences, Zellkraftwerk, IonPath, and Mission Bio launching tools for single-cell protein analysis.

Christoph Borchers, a professor at McGill University and director of the Segal Cancer Proteomics Centre whose lab is heavily involved in clinical work, selected an application that is still rooted in more basic research.

"Tremendous progress has been made in the area of structural proteomics," he said. "Numerous tools, approaches, and reagents have been developed, making structural proteomics now accessible and feasible for every proteomics laboratory in the world."

The use of mass spectrometry combined with methods like peptide crosslinking and hydrogen/deuterium exchange (HDX) has allowed proteomics techniques to complement traditional structural research methods like X-ray crystallography and cryo-electron microscopy.

This has opened up the possibility of observing not only how protein expression or isoforms change in response to various stimuli or disease states, but also how protein structures change, which is key to better understanding their functions within organisms.

For Olga Vitek, a professor at Northeastern University and an expert in the statistical methods underlying proteomic analyses, the key takeaway from the decade was less any single development than the way a series of advances came together to enable higher throughput and higher data quality.

"The big [one] there is the simultaneous improvements in the quality of the data and the scalability of the experiments," she said. "New instruments, new workflows, and the associated software all had to come together to achieve that."