We humans are obsessed with the future. It’s likely a benefit of being equipped with a dandy neocortex — you don’t really see other fauna making to-do lists and planning for retirement. But scheduling your week is one thing; making predictions about natural phenomena, like the weather, is quite another. Natural systems are just plain tough to predict, and those who give it a shot run the risk of being wrong more often than not.
But not all predictions are equal. The trick is in making mistakes as quickly as possible within a testable framework. That said, diseases and even the bodies that harbor them are maddeningly heterogeneous, and the search for verifiable markers of illness or drug response is one that continues to be riddled with difficulties.
Biomarkers — broadly used here to refer to any characteristic that can be measured to reflect physiological, pharmacological, or disease processes in animals or humans — may be discovered at the bench, but getting them to the clinic is another story altogether. That is, finding a useful biomarker is necessary, but hardly sufficient, for getting it validated and eventually approved by regulators for clinical practice.
The challenges facing biomarker researchers are many, and can range from technical constraints to conceptual fuzziness to bumps in the road to regulatory approval. “In a general sense, there is not a pipeline for moving biomarker discovery to a standard clinical test,” says Leigh Anderson, founder and CEO of the Plasma Proteome Institute.
The good news is that plenty of people are taking a critical look at this imperfect path to accelerate biomarker development and, presumably, to achieve part of the much-heralded vision of personalized medicine. Academics are wrestling with basic scientific problems in the discovery and validation of effective markers, as are biotechs, while pharmas are shuffling their organizations to better foster development of biomarkers for both internal decision-making and eventual clinical use.
Read on for lessons from a slew of stakeholders in the field of biomarker research. GT talked to thought leaders, both of the academic and industrial stripe, to learn what is being done (and undone) to get clinically validated biomarkers ready for primetime.
A bench with a view
If you want to hear something interesting about what constitutes a winning strategy for getting a biomarker from the bench to the bedside, talk to a proteomics researcher.
At this year’s US HUPO meeting, the topic partitioned the proteomics community into roughly two camps: those who maintained that efficient discovery of putative biomarkers is the real challenge, versus those who say that validation is in fact key. Lee Hartwell, president and director of the Fred Hutchinson Cancer Research Center, falls into the former camp. In his keynote address at the conference, he said that efficient discovery is the primary bottleneck in getting effective biomarkers to the clinic. Meanwhile, Martin McIntosh, also at the Hutch, holds that validation is the hardest part, whereas “the discovery is an engineering problem.”
According to McIntosh, the question comes down to what is really meant by the term validation. If validation means finding biomarkers that work, “then it’s tautologically defined,” he says. “For years, many people have been discovering potential markers,” says McIntosh, “but the difficulty is in evaluating their impact.”
Leigh Anderson also characterizes the discovery-versus-validation debate as one of semantics. “You could say that a biomarker candidate is discovered with proteomics,” he says, “but it’s possible to say that it’s really not a biomarker until you validate it. So, in a peculiar sense, the discovery actually occurs at the stage of validation.”
The business of finding those candidates in the first place ought to fulfill certain conditions at the start if a potential biomarker is to make it to the stage of validation. One key requirement is identifying a large enough set of plausible markers to begin with, as each stage of validation will winnow the candidates further. “If validation is a journey, then at every step you’ll never add things to the pile,” says McIntosh, “so you’d better start with more things than you expect to end up with in the end.”
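McIntosh’s point about winnowing can be made concrete with a little arithmetic. The sketch below (illustrative only; the pass rates are invented, not from the article) works backward through a validation funnel in which each stage can only remove candidates, to show how large the starting pool must be:

```python
import math

def candidates_needed(target_survivors, pass_rates):
    """Work backward through validation stages: each stage only removes
    candidates, so the starting pool must be large enough that the
    expected number of survivors still meets the target."""
    needed = target_survivors
    for rate in reversed(pass_rates):
        needed = needed / rate
    return math.ceil(needed)

# Hypothetical per-stage pass rates: discovery -> verification -> clinical validation
stages = [0.25, 0.25, 0.5]
pool = candidates_needed(3, stages)
print(pool)  # starting candidates needed to expect 3 validated markers
```

Even with these generous made-up rates, ending with three validated markers means starting with nearly a hundred candidates — which is exactly why starting with too few “things in the pile” dooms a validation effort before it begins.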
McIntosh’s group makes heavy use of mass spectrometry and high-dimensional antibody arrays using phage, as well as other approaches, to find putative markers. The key advantage of using mass spec for this kind of work, he says, is that it has the potential to identify variant proteins — whether they’re splice forms, the result of a SNP, or a translocation — in an unbiased way. “It could conceivably identify anything,” he says, but keeping the validation question in mind, he adds that “a cynic will tell you that the first thing you’ve got to do is identify something.”
The main challenges in validation, according to Anderson, are in the scale of the work that has to be done and the difficulty of assembling a large enough set of carefully selected samples. Moreover, because proteomics discovery platforms require a large amount of fractionation to be able to detect low-abundance proteins, many fractions need to be analyzed for each sample — not a small amount of effort per sample, and hardly cost-effective when it comes to the number of samples needed to do a validation study. This raises the prospect of switching platforms, or, according to Anderson, “taking candidates identified in the discovery process and making alternative, more high-throughput, lower-cost methods of measuring them in large numbers of samples to do the validation.”
Anderson has already made inroads on that front. In a recent paper appearing in Molecular and Cellular Proteomics, he and Christie Hunter of Applied Biosystems describe a mass spectrometric technique involving multiple reaction monitoring assays, or MRMs, used to measure specific peptides from proteins found in plasma. The MRM-based approach has the advantage of not requiring antibodies, which can cost $2 million to $5 million per protein to make for an FDA-approvable immunoassay, while delivering far higher throughput in much less time.
Hunter describes the workflow: MRMs are built for peptides from proteins coming out of discovery, and detection of an MRM signal then triggers a full-scale MS/MS scan, yielding several pieces of information at once: retention time, the MRM peak, and the full-scan MS/MS of the peptide of interest. After winnowing down a set of putative markers using this approach, Hunter says, “that’s when you can move into the validation phase, using perhaps ELISA assays or peptide quantification in the more traditional sense, and do that in an even larger number of samples.”
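The triggering logic in that workflow can be sketched in a few lines. This is a toy model with invented data structures and thresholds — not Applied Biosystems’ actual acquisition software — but it shows how the three pieces of information end up captured together:

```python
THRESHOLD = 1000  # hypothetical ion-count threshold for triggering a full scan

def run_mrm_triggered_msms(chromatogram, acquire_msms):
    """chromatogram: list of (retention_time, mrm_intensity) points for one
    monitored transition. acquire_msms: callback standing in for the
    instrument's full-scan MS/MS acquisition."""
    hits = []
    for rt, intensity in chromatogram:
        if intensity >= THRESHOLD:
            spectrum = acquire_msms(rt)  # full-scan MS/MS of the peptide
            hits.append({"retention_time": rt,
                         "mrm_peak": intensity,
                         "msms_spectrum": spectrum})
    return hits

# Usage with fake data: one peak crosses the threshold at rt = 12.4 min
trace = [(12.1, 40), (12.4, 5200), (12.7, 60)]
hits = run_mrm_triggered_msms(trace, acquire_msms=lambda rt: [("fragment", 1)])
```

The design point is that the cheap, targeted MRM measurement acts as the gatekeeper, and the expensive full scan runs only when there is something worth confirming.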
The technical issues may be daunting, but it will take more than better mass specs or more antibodies to find meaningful markers, especially those that correlate to early stages of disease. McIntosh points out that there are presently many markers that “can tell someone who already knows they have cancer, they have cancer”; but identifying effective markers for asymptomatic patients is another problem altogether. For one, the samples are hard to obtain. Another challenge is finding a marker specific to a given disease that produces no false positives. “What we really want are biomarkers that can identify people with disease early and do it with a rate that is relevant for public health,” McIntosh says.
Mining, biotech style
Proteomics researchers certainly don’t hold a monopoly on thinking about and pioneering approaches to finding and verifying biomarkers. Likewise, there is certainly not a consensus on the best way to go about discovery and validation. Just as each lab has specific protocols passed down and amended from one postdoc to the next, different technology-specific biotechnology companies have established unique approaches to biomarker development.
Jorge Leon, acting chief scientific officer at Orion Genomics, has developed a five-step process by which epigenetic-based markers are discovered and validated. Thanks to the way in which Orion has scaled instruments and scientists, each stage takes about three months. According to Nathan Lakey, Orion’s president and CEO, Leon’s framework does away with the bottlenecks that have historically plagued both the discovery and clinical validation of novel markers. “If you have a bottleneck, you have a process problem,” Lakey says. The problem that he sees other companies running into stems from the fact that “the biomarkers that they start out with look promising, but in the end don’t pan out. As a result, they must do a larger and larger patient validation to see if they’re statistically significant.”
In the first phase, tumor samples and controls are subjected to genome-wide scanning of 110,000 unique loci. The results are used to generate a high-resolution methylation map for every gene, promoter, and site that may be an epigenetic lesion. This stage involves 10 tumors and 10 carefully selected controls. The DNA from these tissues is prepared with demethylation enzymes, and is then hybridized to an array. Once initial leads are ascertained, individual assays are developed for each, which are then tested against the original tumors and controls to confirm that leads are indeed differentially methylated.
The second stage involves expanding the first leads and developing individual assays against those found to be the most promising and robust. Tumors from a different source are then independently tested, this time on the order of 25 samples and 25 controls. Once tested and validated, the remaining “clinically validated leads” move on to the next stage, called “technical and clinical development.” At this point, more robust assays are performed on at least 100 clinical samples and 100 controls. The fourth phase involves the transfer of technology to another institution, one of Orion’s academic partners. A separate trial is performed at this stage, with a minimum of 250 patients and 250 controls.
“After we finish the fourth phase,” Leon says, “we consider the biomarkers validated and ready for prime time. Basically, we go over close to a thousand samples for validating a biomarker. We do it in a very well-controlled process that pays a lot of attention to the reproducibility of the assays.”
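Tallying the per-stage minimums described above shows how the sample counts accumulate. The sketch below uses only the figures quoted in the article (stage names are paraphrased); real cohorts run larger than these floors, which is how the total approaches the “close to a thousand” samples Leon cites:

```python
# Per-stage cohort minimums for Orion's validation pipeline, as described
# in the article: (stage, tumor/patient samples, controls).
stages = [
    ("genome-wide discovery",              10, 10),
    ("independent lead confirmation",      25, 25),
    ("technical and clinical development", 100, 100),
    ("external trial at academic partner", 250, 250),
]

total = sum(cases + controls for _, cases, controls in stages)
print(total)  # minimum samples across all four stages
```

Note how the burden is back-loaded: the final external trial alone accounts for roughly two-thirds of the minimum sample count, which is why weak leads that survive early stages become so expensive later.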
Lakey characterizes the approach used at Orion as “suspicion-blind discovery.” That is, instead of pre-selecting 100 or so genes based on the literature or bioinformatic annotation, their approach is to look at the entire genome, letting it reveal which loci are most correlated to the biomarker trait under investigation. “We think if you cast a wide enough net, you’ll have a fish that’s worth keeping,” Lakey says.
Underlying it all, however, is the conviction that the fundamental mechanisms that regulate gene expression are not just genetic, but are also epigenetic. As Leon puts it, “The new vision here is that both epigenetic and genetic changes work in a very orchestrated way to define the expression of the genome, which constitutes the substrate of biomarker discoveries.”
Leon’s belief in the power of epigenetics to fuel biomarker development is not unwarranted; other firms are already on target for commercializing markers based on methylation status. For instance, at last month’s American Association for Cancer Research meeting, Berlin-based Epigenomics presented data showing that a test checking the methylation of a single gene, PITX2, can predict recurrence of prostate cancer, defined as a rise in prostate-specific antigen levels, in patients who have had their glands surgically removed.
Another epigenomics-based company, OncoMethylome, has already developed an assay that measures the methylation status of the MGMT gene in patients with glioblastoma multiforme. The assay, initially identified by researchers at Johns Hopkins, is based on the hypothesis that down-regulation of the MGMT gene might be a significant predictor of tumor response to temozolomide, a cytotoxic, alkylating drug used for the treatment of glioblastoma in conjunction with radiotherapy.
According to OncoMethylome CEO Herman Spolders, the assay is currently undergoing a worldwide prospective study; meanwhile, the company itself has set up a collaboration and licensing arrangement with Schering-Plough, which markets temozolomide. Spolders sees a major advantage in using methylation markers, as compared to RNA, in the ability to use stored samples of routinely collected tissues for retrospective analyses.
Metabolomics provides another point of departure for biomarker development. Metabolon, a company that specializes in accurately measuring the spectrum of biochemical changes in a specific disease or pharmacodynamic process and mapping those changes to metabolic pathways, has been making progress on identifying diagnostic and prognostic biomarkers for amyotrophic lateral sclerosis (Lou Gehrig’s disease) in collaboration with Massachusetts General Hospital. “In our discovery studies of ALS, we’ve been encouraged by results and expect that we will be able to identify a small molecule biomarker for that particular disease and to launch that into a full validation study shortly,” says the company’s chief scientific officer, Mike Milburn.
Milburn has reason to be optimistic, as the company was recently issued a patent that broadly covers metabolomic methods used to identify the molecular profiles seen in ALS patients. These methods combine a mass spec approach with proprietary informatics for data analysis.
The main advantage Milburn cites in terms of metabolomics as a category for biomarker discovery, instead of transcriptomics or proteomics, is that small molecules have been commonly used as biomarkers before. (Think glucose for diabetes or serum creatinine for renal disease.) Unlike using genes or proteins for discovery, where the spectrum of possible candidates ranges from 30,000 genes to more than 100,000 proteins, Metabolon’s scientists believe the number of small molecules in humans to be more on the order of 2,500 to 3,500 — which Milburn says is “a significantly more accountable total number that you can keep track of with the appropriate technology and informatics.”
Beyond ALS research, Milburn says that the company’s technology is being applied to other disease categories and across multiple stages of research. Moreover, many of Metabolon’s clients are pharmaceutical companies looking for biomarkers of disease progression and drug efficacy, and for these clients Metabolon is also searching for markers indicating early toxicity effects of compounds. Even though he’s held his post at Metabolon for less than a year, Milburn has already observed what he calls “an increasing desire of pharmaceutical companies to discover earlier biomarkers for tox or drug effects.”
Corner office view
Milburn is right: pharmaceutical companies are eager to discover biomarkers, both for internal decision-making and for the creation of post-marketing diagnostic tests.
Yet even though these companies typically have a wide range of techniques and tools at their disposal, along with a broad remit to uncover biomarkers of diverse types, the requirements of discovery and validation remain in place. If anything, the process is complicated by the need to align biomarker development timelines with those of drug discovery. Perhaps even more significantly, pharma companies are starting to change the way they operate to facilitate communication and collaboration between teams working on the same problems.
Nic Dracopoli, vice president for clinical discovery technologies at Bristol-Myers Squibb, says that the company deals with a broad range of markers, from the standard genomics and proteomics discovery platforms to more routine clinical assays. As opposed to casting the biomarker development problem wholly in terms of discovery or validation, Dracopoli says that “the biggest issue is actually driving [biomarkers] into clinical practice — it’s access, it’s collecting the samples, it’s getting the appropriate clinical trial amendments and informed consent to approve the biomarker discovery work.”
It’s also about organization. About three years ago, BMS decided to close its departments of clinical assay development, pharmacogenomics, and proteomics in order to integrate the people and functions in those groups with the entire drug-development process. To that end, the company created therapeutic area teams that are responsible for everything from biomarker discovery to clinical development. These teams are included in a department of clinical discovery, which Dracopoli says is “really an exploratory development group responsible for Phase I to Phase IIa studies.” Once a compound emerges from initial discovery, it will go through these phases in order to achieve a first proof-of-concept in humans.
“The idea is to use these studies as a filter for evaluating candidate markers, developing human assays, testing them in early human studies to see whether they’re useful, filtering through and picking the ones that are, and moving these forward into larger Phase IIb and III studies,” Dracopoli says.
Wyeth Research has done something similar with its structure. Andrew Dorner, senior director of molecular profiling and biomarker discovery, says that several years ago the company saw that biomarkers ought to be identified alongside compounds moving toward the market. To that end, Wyeth built an initiative called “translational medicine” that bridges discovery and clinical settings. “In the old days, after discovery would discover something, it would go over the wall into clinical and that would be the end of the story,” Dorner says. “Now we move with that compound … and have discussions with the clinicians early to ensure that what we’re moving forward is going to work in the clinic to the best of our knowledge with the appropriate biomarker assays.”
In terms of stumbling blocks for developing potential biomarker assays, Dorner emphasizes the need for validation, robustness, and reproducibility of the discovery platforms themselves. Establishing standards for newer technology platforms — such as what MIAME does for microarrays — and the technical underpinnings for some of the biomarker assays is therefore a major issue. “Even though the initial exploratory work looks promising, a lot of technical work still needs to be done,” Dorner says.
Don Black, global head of research and development at GE Healthcare Biosciences, also brings up technology hurdles to achieving a reliable biomarker program. GE’s specialty in developing imaging markers presents unique problems, such as the need to conduct standardized, multicenter PET scanning studies.
But technical challenges are typically surmountable, Black says. A more nuanced, and perhaps more difficult, bottleneck in validation is the need for guidelines to define what standard is necessary to get a product on the market. Demonstrating correlation can be done to varying degrees, but strict causality is not exactly achievable within the limits of finite populations. As Black puts it, “What is the standard truth and how close do you have to get to perfection?”
These issues may never be completely solved, but the fact that so many researchers are thinking about them bodes well for the near future of biomarker development. That said, there can be no promises made in this field. “It isn’t absolutely necessary that there is a diagnostic in blood or in the genome for every given disease,” says Leigh Anderson. “It ought to be true, we’d certainly like it to be true, but we really don’t know how powerful this will be.”
FDA’s Critical Path: the regulatory landscape
Pharmaceutical companies large and small are pursuing active agendas to develop biomarkers. As Leigh Anderson points out, part of this trend might be traced to “the FDA’s increasing interest in seeing biomarkers come along because they’re good for diagnostics and also help the drug pipeline.”
Developing biomarkers also makes sound economic sense. According to the FDA, nine out of 10 experimental drugs currently fail in clinical studies because researchers cannot accurately predict the effects of compounds in people based on in vitro and animal studies. Stephen Williams, Pfizer’s head of global clinical technology, says that one of the biggest drivers of the company’s biomarker efforts was “an understanding of the portfolio economics of attrition — that the costs of 90 percent of drug candidates that fail can exceed the revenues from the successes.” Once this was grasped, he says, Pfizer determined to “make failure cheaper through biomarkers.”
The will on the part of pharma is definitely there, as evidenced by the roster of companies participating in the FDA’s Predictive Safety Testing Consortium, a public/private partnership between several large pharmaceutical companies and the Critical Path Institute, a nonprofit organization founded by the FDA, the University of Arizona, and SRI International. The project’s initial pharma partners include Bristol-Myers Squibb, GlaxoSmithKline, Johnson & Johnson Pharmaceutical Research & Development, Merck, Novartis, Pfizer, Roche, and Schering-Plough Research Institute.
The consortium’s goal is to foster the exchange of knowledge and resources between companies, which will in turn share details on the methods each has cultivated for specific types of assays. By testing each other’s tests, members hope to establish reproducibility, which should lead to a better understanding of potential side effects before drugs enter clinical trials. The project may also go a long way toward reducing duplicated effort, which is a real risk with so many companies gathering proprietary data on similar problems.
The results of the industry-wide comparison will be summarized by the Critical Path Institute for submission to the FDA. The agency will have the final say on which methods are found to be reliable and reproducible, and those methods will then form the basis for official guidelines regarding which safety tests should be used in the drug development process.
The Vocabulary of Biomarkers
Biomarkers can be thought of as falling into three general categories: clinical measure markers (such as weight as an indicator of obesity, or increased cholesterol as a hallmark of cardiovascular disease), imaging markers (labeled antibodies, radionuclides used in PET imaging), and molecular markers (DNA, RNA, protein, metabolites, etc.).
All three types of biomarker may have different functions, as described below.
| Role | Key activity | Sources of discovery | Validation endpoints |
| --- | --- | --- | --- |
| Diagnostic | Differentiates healthy from a specific disease state | Expression analysis, epidemiology studies, mechanism of disease studies | Correlation with a specific disease state |
| Prognostic | Predicts the likely course of disease | Expression analysis, epidemiology studies, mechanism of disease studies | Correlation with a clinical outcome |
| Stratification | Prior to administration of a drug compound, predicts which patients will respond or suffer from adverse effects | Pre-clinical studies, clinical trials | Correlation with a clinical response to a specific drug in controlled clinical trials |
| Pharmacodynamic/pharmacokinetic | Tracks a drug’s in vivo activity at different concentrations | Metabolite analysis, animal models | Correlation with the concentration or activity of a drug in animal and human studies |
| Efficacy/outcome | Monitors the beneficial effects of a specific drug on an intended target or condition | Molecular targets, clinical trials | Correlation with the activity of a drug in clinical trials with placebo controls |
| Toxicity | Indicates potentially harmful effects of a drug on any unintended cellular processes, cells, tissues, or organs | Toxicology studies, immunohistochemistry, clinical trials | Correlation with the concentration or activity of a specific drug in clinical trials |
L Anderson and CL Hunter
Quantitative mass spectrometric MRM assays for major plasma proteins.
Mol Cell Proteomics. 2006 Apr;5(4):573-588.
ME Hegi, et al.
MGMT gene silencing and benefit from temozolomide in glioblastoma.
N Engl J Med. 2005 Mar 10;352(10):997-1003.
Y-F Hu, et al.
From traditional biomarkers to transcriptome analysis in drug development.
Curr Mol Med. 2005 Feb;5(1):29-38.
SJ Wang, et al.
Retrospective validation of genomic biomarkers – what are the questions, challenges, and strategies for developing useful relationships to clinical outcomes – workshop summary.
Pharmacogenomics J. 2006 Mar-Apr;6(2):82-8.