Merck Sees Promise for Mass Spec-Based Protein Biomarker Pipeline in Drug Development


This story originally ran on Oct. 7.

By Adam Bonislawski

Although researchers have had difficulty applying mass spectrometry-based protein biomarker assays to the clinical setting, at least one pharmaceutical firm believes it can use the technique to aid decision-making during drug development.

At Cambridge Healthtech Institute's Accelerating Development & Advancing Personalized Therapy conference last month in Arlington, Va., Daniel Spellman, Merck senior research biochemist for proteomics, provided an overview of the drugmaker's mass spectrometry workflow for protein biomarker development.

In his presentation, Spellman emphasized the importance of the sample prep and bioinformatics portions of the company's biomarker development pipeline and said that even though such assays have not made rapid inroads into the clinic, Merck believes it can use the technique "to make decisions during the development process of drugs."

"It might not necessarily be an in vitro diagnostic that's going to hit the market," he said of the company's mass spec-based biomarker assays, "but it can certainly help us make decisions as we move a drug to market."

Spellman focused in particular on Merck's use of differential mass spectrometry, wherein researchers quantitate ions detected in full scan mass spectra of samples from various treatment cohorts and look for differences between the groups.

The technique, he said, has the advantages of not requiring chemical labeling, sample pooling, or antibodies, while allowing for "complex, multifactorial experiments." He also noted that researchers at the company have been able to quickly translate assays developed in animal models to humans – in part because reagents like antibodies aren't required.
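
The core of such a label-free comparison is simple to sketch. The following is a minimal, hypothetical Python illustration of the differential idea, not Merck's actual pipeline: peptide-ion intensities aligned across samples from two cohorts are compared feature by feature in log space, with simulated data standing in for real spectra and no multiple-testing correction applied.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_features, n_per_group = 10_000, 6   # ~10,000 peptide ions per CSF sample, per the article

# Simulated aligned intensity matrices (features x samples) for two cohorts.
control = rng.lognormal(10, 1, (n_features, n_per_group))
treated = control * rng.lognormal(0, 0.1, (n_features, n_per_group))
treated[:50] *= 4.0   # spike in 50 true four-fold differences

# Compare in log2 space so fold changes are symmetric.
log_c, log_t = np.log2(control), np.log2(treated)
log2_fc = log_t.mean(axis=1) - log_c.mean(axis=1)
t_stat, p_val = stats.ttest_ind(log_t, log_c, axis=1)

# Naive thresholds; a real pipeline would correct for multiple testing.
hits = np.where((np.abs(log2_fc) > 1) & (p_val < 0.01))[0]
print(f"{hits.size} candidate differential features")
```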

Because of the relative simplicity and quickness of the workflow, Merck researchers have also been able to "match [biomarker development] timelines to pharmaceutical research and the development of drugs," Spellman said, allowing them to "actually impact decisions."

Spellman cited Thermo Fisher's LTQ Orbitrap and LTQ Orbitrap Velos as the instruments most typically used in the workflow. More than the specific instrumentation involved, however, he stressed the importance of establishing the stability and reproducibility of the pipeline's sample prep and LC-MS platforms.

"[Sample prep] looks like a very small step, but this is actually one of the most important steps," he said. "You have to spend a lot of time here making sure that your sample prep is reproducible in order for this label-free differential proteomics approach to work."

Typically the sample prep involves immunodepletion of abundant proteins followed by additional techniques to enrich for specific subpopulations of interest like phosphopeptides or glycoproteins, Spellman said.

The workflow also requires careful quality assurance and quality control to make sure the instrumentation remains stable across experiments, he said, noting that "establishing an LC-MS platform that is stable and reproducible in both retention time and intensity … is really important and not trivial."

"We go to great measures to run QA-QCs to make sure our instruments are stable, that our chromatography isn't drifting over time," he said. "We'll throw out experiments if they don't meet this criteria, and we prepare the instruments for as a long as it takes to get this to work."

Data analysis is key, as well. According to Spellman, a single cerebrospinal fluid sample typically generates around 4,000 to 5,000 high-resolution spectra, which together contain roughly 10,000 peptide ions.

"We're interested in digging very deep into this in order to distinguish features between treatment groups and determine relative numbers, so another very important part of this workflow is our comprehensive set of analysis tools," he said.

His group does much of its data analysis on the Elucidator 3.5 software platform, he noted. Developed by Merck in conjunction with its former subsidiary Rosetta Biosoftware, Elucidator "is really an end-to-end software platform for handling proteomics data – mass spec data visualization, data mining, statistics, visual scripts, support for a number of plug-ins," Spellman said.

He suggested that the continued availability of the platform may be in doubt, however, due to the June 2009 sale of Rosetta Biosoftware – including the Elucidator platform – to Microsoft. A month after the purchase, Microsoft announced that it would no longer sell Elucidator and that it would cease support of the software in July 2011 (BI 06/05/2009).

In August 2009, Merck contracted with IT services firm Ceiba Solutions for software maintenance and support of Rosetta Biosoftware products including Elucidator. Ceiba's contract for support of Elucidator ends in 2011, however (BI 12/11/2009).

Upon purchasing Rosetta, Microsoft announced that it would be wrapping the company's genetic, genomic, metabolomic, and proteomics data-management software into its Amalga Life Sciences platform. It also announced that under the terms of the sale Merck would become a customer of its Amalga Life Sciences 2009 platform and would also "provide strategic input to Microsoft on the direction and evolution of new solutions incorporating Rosetta Biosoftware technologies."

As Spellman made clear, however, the Elucidator platform remains an important part of Merck's biomarker development workflow.

"I can't speak to the future availability of the [Elucidator] software," he said, "but it is currently supported, so we have our fingers crossed on that."

Using Elucidator, the researchers identify differentially expressed proteins, from which they build a list of candidate biomarkers. They then qualify these biomarkers via multiple reaction monitoring assays developed on a triple quadrupole instrument – a method Spellman said has the advantage of being a "very specific and sensitive approach" to quantitation that also allows for multiplexing.
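
As a rough illustration of how MRM qualification turns candidates into a multiplexed quantitative readout, the sketch below sums transition peak areas per target peptide and ratios them against a stable-isotope-labeled internal standard – a common MRM scheme that the article does not confirm as Merck's. All peptide names, transitions, and areas are hypothetical.

```python
# peptide: ([light transition peak areas], [heavy-standard transition peak areas])
transitions = {
    "CANDIDATE_PEP_A": ([4.1e5, 3.3e5, 1.2e5], [8.0e5, 6.6e5, 2.5e5]),
    "CANDIDATE_PEP_B": ([9.7e4, 5.1e4], [2.1e5, 1.0e5]),
}

def light_heavy_ratio(light, heavy):
    """Summed-area ratio; with a known spike level of the heavy standard,
    this converts to an absolute concentration via a calibration curve."""
    return sum(light) / sum(heavy)

# Multiplexing is just iterating the target panel within a single run.
for pep, (light, heavy) in transitions.items():
    print(pep, round(light_heavy_ratio(light, heavy), 3))
```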

"We think we have here a workflow for translating protein biomarkers from discovery to what is a pretty robust quantitative assay that is amenable to measuring large numbers of clinical samples," he said.


Spellman also highlighted top-down proteomics work Merck is doing as part of its biomarker development efforts – work he said had been enabled by the development of electron transfer dissociation (ETD) ion fragmentation, which he noted allows for more effective sequencing of intact proteins and larger polypeptides than typical collision-induced dissociation.

In particular, ETD "often retains post-translational modifications," which allows researchers to examine the differential expression of a protein's different isoforms, he said. He cited as an example work the company had done investigating an O-glycosylated form of apolipoprotein C-III as a biomarker for coronary artery disease.

"What's interesting and what you might miss from a more bottom-up approach is that not all isoforms of this protein are actually changing," he said. "This glycoform is changing at a much greater level than the unmodified protein, which really doesn't change in any statistically significant fashion. If you were to digest this protein you may not get that information unless you were lucky enough to identify the modified peptides."

"We think that this is a really unique and potentially promising approach for studying functionally discrete protein isoforms," he said.