Microarrays might be considered old standby tools in research labs, but in the clinic, they shine bright and new. Chips are reinventing themselves as diagnostic tools with a lot of potential. Microarrays offer clinical labs the same advantages they do in the research lab — the ability to scan thousands of genes or samples — but with a new goal in mind: helping to diagnose or treat a patient.
“It’s such a different concept than a research lab. [Microarrays] have great potential — basically they have made our life so easy in terms of just looking at the CGH chips and comparing it to the current techniques which are used for deletions and duplications,” says Madhuri Hegde, director of the Emory Genetics Laboratory, who has been working on identifying the genetic changes associated with muscular dystrophy and creating a diagnostic test based on those discoveries.
As researchers have known for quite some time, many diseases are not caused by just one gene. There are multiple markers for muscular dystrophy, different cancers, and coronary conditions, among many other diseases. “I think the trend now is that you will not find one single marker that will have a sufficient specificity and sensitivity. The trend is really towards multi-markers and, especially with microarrays, you have the possibility to look at 30,000 [or] 40,000 genes at the same time and measure their gene expression level,” says Dag Christiansen, vice president of marketing at DiaGenic, a Norwegian biotechnology company.
Heidi Rehm, a researcher at the Harvard-Partners Center for Genetics and Genomics, agrees. She is developing a number of array-based tests, including one for hearing loss and one for hypertrophic cardiomyopathy. “Because there are so many genes that all cause the same thing — hearing loss and that’s the same as the HCM — it’s much easier to develop a test that can test lots of different genes at the same time,” she says.
Microarrays offer even more than the vast number of genes they can scan. Controls for sensitivity, specificity, and accuracy — those all-important issues when moving in clinical circles — can be included right there on the chip. “We chose microarrays for a couple of reasons,” says Deborah Neff, president and CEO of Pathwork Diagnostics, a company focusing on oncology. “We knew we were going to need a lot of data and that we were going to end up looking at a high number of genes and that we were going to come up with a way to standardize across labs.” Her company’s Tissue of Origin Test includes a set of genes that are stable across the tissue types being tested so the analysis team can make adjustments for variability in processing or scanner differences.
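In principle, that kind of on-chip reference correction is straightforward: shift each sample so that its stable control genes hit a common expected level. The minimal sketch below illustrates the idea; the gene names, values, and median-shift rule are invented for illustration and are not Pathwork's actual procedure.

```python
import numpy as np

# Hypothetical log2 expression values for one sample (gene -> intensity).
sample = {"GAPDH": 11.9, "ACTB": 12.4, "TP53": 7.1, "ERBB2": 9.8}

# Genes assumed to be stable across the tissue types being compared.
reference_genes = ["GAPDH", "ACTB"]

# Expected reference-gene level, established from previously processed arrays.
EXPECTED_REFERENCE_LEVEL = 12.0

def normalize(sample, reference_genes, expected_level):
    """Shift every value so the reference genes match their expected level,
    compensating for lab-to-lab processing or scanner differences."""
    observed = np.median([sample[g] for g in reference_genes])
    offset = expected_level - observed
    return {gene: value + offset for gene, value in sample.items()}

print(normalize(sample, reference_genes, EXPECTED_REFERENCE_LEVEL))
```

A real test would derive and validate the adjustment far more carefully, but the sketch captures why carrying stable genes on every chip makes results comparable across labs and scanners.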
Microarrays are getting people starry-eyed not only because they hold a lot of information, but also because they can be adapted to clinical settings without blowing the budget. “The chips are slightly more expensive, I would say. But I think with time, as more labs get into this technology, it is going to drive the price down,” Hegde says.
That predicted trend will be critical for tests like the one designed by Kathy Rowlen, a professor at the University of Colorado at Boulder. The aim of her FluChip is to quickly diagnose what form of influenza has struck a person, village, or population. To accomplish that, her test needed to be based on a fairly inexpensive and changeable platform. “[Microarrays] are easy to redesign if you need to change your primers or your sequences, and that can be done fairly quickly and inexpensively,” she says.
All this comes down to diagnosis. The theory is that, by using a microarray as a diagnostic tool, a physician would find out more about a patient’s particular brand of cancer or coronary condition, for example, and be able to treat the disease more effectively and efficiently.
But even as scientists race to get their biomarkers and panels of genes onto microarrays for use in the clinic, a dose of reality has come in the form of regulatory agencies and insurance payment concerns. Federal guidelines for approving these diagnostic tests are being drafted and tested at the same time that the diagnostics themselves are entering the regulatory approval process. For the few tests that have made it all the way to market, insurers are already balking about paying for them.
Still, these problems haven’t slowed the development of more microarray-based diagnostics, especially the in vitro diagnostic multivariate index assays. These tests, though, have some maturing to do before they can deliver on the lofty goal of personalized medicine.
A Short History
It wasn’t long after the launch of Roche’s AmpliChip CYP450 test in late June of 2003 that the company found itself in a bit of trouble with the US Food and Drug Administration. The AmpliChip was the first microarray-based diagnostic test to wend its way into the marketplace — and while Roche thought it fell into a category of unregulated medical devices, the FDA disagreed. (For more detail, see sidebar below.)
The AmpliChip test genotypes variants of the CYP2D6 and CYP2C19 genes to classify a patient as a slow or fast drug metabolizer. The liver enzymes encoded by these genes are particularly active in breaking down certain psychiatric drugs and the beta blockers that treat cardiovascular disease.
As such, FDA argued that the AmpliChip test fell into one of the more regulated categories of either a class II or III medical device. FDA finally approved the AmpliChip as a class II device in December of 2004.
It was a few years until the next microarray-based diagnostic emerged from the regulatory process. Like the AmpliChip, Agendia’s MammaPrint uses microarrays as the basis for its test, but this diagnostic had the extra distinction of being an in vitro diagnostic multivariate index assay, a class of devices that evaluates different variables to come up with a patient-specific diagnosis — and one that FDA had never before reviewed. “They found in us a willing partner, if you wish, to sort of work with them,” says Rene Bernards, Agendia’s chief scientific officer. “I don’t think the FDA realized what an in vitro diagnostic test should look like to go through FDA clearance. So it was for them as much a learning experience, I believe, as it was for us.” Another stumbling block: FDA had never evaluated a gene expression array before, since the AmpliChip was a DNA array. In February of this year, MammaPrint won FDA approval as a class II diagnostic — becoming just the second array-based test to do so.
The Low-Maintenance Home-brew Test
Murky or nonexistent regulatory guidelines, however, are not keeping academics, biotech startups, and established companies away from the allure of developing a microarray-based diagnostic. Instead, it is an active area of concept and product development. Medical laboratories in particular have picked up on microarrays as diagnostics, creating and validating the tests themselves and then offering them as a service. These laboratory-developed, or “home-brew,” tests are made in CLIA-approved labs and are less regulated by the FDA. “Those are the tests that the lab develops themselves and uses within their lab without actually selling the kits to anyone else,” says Elizabeth Mansfield, a science policy analyst at FDA.
One example of a group taking that approach is the genetics laboratory at Emory University’s medical school. Led by Madhuri Hegde, the Emory lab has developed a microarray-based test for X-linked muscular dystrophy, called EmArray dystrophin.
Hegde began designing the EmArray dystrophin test last September. Since muscular dystrophy can stem from deletions and duplications as well as point mutations, she decided to base the test on both NimbleGen’s CGH and resequencing array technology.
The CGH chip Hegde made is highly dense — it covers the entire 2.2 megabase region that contains the dystrophin gene as well as some flanking regions — and has many overlapping areas. The resequencing chip contains the 79 exons, eight promoters, and five deep introns that have documented mutations related to muscular dystrophy. The data from the chips is then analyzed using David Cutler and Michael Zwick’s Abacus, an automated statistical method for interpreting microarray hybridization data. From December to March, Hegde validated both sets of chips, finding them to have very high concordance and equal sensitivity for males and females. The EmArray for dystrophin came out at the end of May.
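To give a rough sense of how array CGH data point to deletions and duplications, the sketch below computes patient-versus-reference log2 ratios for a handful of hypothetical probes across a region and flags probes that fall outside simple thresholds. The probe values and cutoffs are invented for illustration; they are not the EmArray's actual analysis, which relies on Abacus.

```python
import numpy as np

# Hypothetical per-probe intensities along a genomic region, ordered by
# position: the patient sample versus a normal reference sample.
patient   = np.array([520, 510, 260, 250, 255, 530, 1050, 1020, 515])
reference = np.array([500, 505, 500, 495, 510, 520,  500,  510, 500])

log2_ratio = np.log2(patient / reference)

def call_copy_number(ratios, loss_cutoff=-0.6, gain_cutoff=0.5):
    """Label each probe as a putative deletion, duplication, or normal
    using simple log2-ratio thresholds (illustrative cutoffs only)."""
    calls = []
    for r in ratios:
        if r < loss_cutoff:
            calls.append("deletion")
        elif r > gain_cutoff:
            calls.append("duplication")
        else:
            calls.append("normal")
    return calls

print(list(zip(np.round(log2_ratio, 2), call_copy_number(log2_ratio))))
```

With probes tiled densely and overlapping across the region, several consecutive probes dropping to roughly half intensity marks a deleted exon, while probes at roughly double intensity mark a duplication.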
“We launched it in the clinical lab and we’ve done quite a few samples. We found some really nice things, actually,” Hegde says.
Tests On the Way
Agendia’s MammaPrint may have been the first in vitro diagnostic multivariate index assay to be approved under FDA’s draft guidance, but there are many more IVDMIAs waiting in the wings, including Pathwork Diagnostics’ Tissue of Origin Test, which is currently under review. Even more are on the threshold of being submitted, such as DiaGenic’s Alzheimer’s test and Harvard-Partners Center for Genetics and Genomics’ hypertrophic cardiomyopathy and hearing loss tests, all of which are currently undergoing validation. Some of these developers are joining forces with established array makers to bring their content to market — and to skirt side issues of developing and validating a new chip platform.
Pathwork Diagnostics designed its Tissue of Origin Test to determine, with a stated degree of accuracy, the primary tissue from which a tumor originated. Then, the company hopes, oncologists will be able to use that information to give their patients cancer-specific treatment. To tell tumor types apart, the test analyzes the tumor’s RNA and sifts through the hybridization intensity data to reach its conclusion. “The microarray was a great platform because you could look at so many pieces of information simultaneously in an effective way,” says Neff, who heads up Pathwork. “In fact, our test uses more than 1,600 genes, which would be hard to do with most other technologies and have it be cost- or time-effective. It’s a perfect platform when you really require a lot of information.”
To create the test, Pathwork Diagnostics teamed up with Affymetrix under a partnership model first used by Roche to develop the AmpliChip. Through the agreement, Affy’s partners identify content for the diagnostic test, develop the assay, apply the content to Affy’s microarray, and take the test through FDA and into the market, where they advertise the test under their own labeling, says Cris McReynolds, vice president of business development for molecular diagnostics at Affymetrix. Along the way, McReynolds says, Affymetrix provides support and, of course, the array and platform.
Affymetrix has slightly different agreements with academic centers, such as Harvard-Partners. “These are all research institutions that have a bias toward clinical application,” McReynolds says. Together, Affymetrix and its academic partners work out a research plan with clinical applications. “Basically, through a joint steering committee approval process, we could develop array-based chips or tests where they would waive the design fee for the project and we just pay for the chips,” says Heidi Rehm at Harvard.
Under the Affy-Harvard agreement, Rehm is developing a variety of diagnostic chips for genetic diseases, including one for hypertrophic cardiomyopathy and one for hearing loss. For these diseases, Rehm designed diagnostic chips that do both resequencing and genotyping. “That allows us to have increased sensitivity for reported mutations at the same time that we’re detecting novel mutations in the resequencing portion,” she says.
Rehm also restructured the probes on her chips so they are more sensitive, basing her improvements on Affymetrix’s SNP arrays. Her diagnostic chips are then analyzed by Affymetrix’s BRLMM software.
To validate the chips, Rehm created a pool of patients. “The challenge is when you’re dealing with an array-based technology that tests for potentially hundreds of thousands of different things — there’s no way to validate every possible outcome. So we did a slightly modified approach to increase our validation capabilities without running thousands of samples,” she says. Since Harvard has tested people for HCM for quite some time, it has a large bank of patient samples covering a number of different mutations. The pools Rehm created represent the different exons harboring mutations across the entire patient population.
Now, Rehm is working on building an IT pipeline to create an algorithm to interpret the data from the chips. “If we have three pieces of data for the same base, which one is the right one?” she says. “We filter out false positive calls that have been made on every chip that we’ve ever run. We’re just going to ignore them. We’ve followed up on them 300 times and they are always negative.”
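A minimal sketch of the two ideas Rehm describes, taking a majority vote when overlapping probes give conflicting calls for the same base and ignoring positions that have been false positives on every chip, might look like the following. The positions, calls, and voting rule are assumptions for illustration, not the Harvard-Partners pipeline.

```python
from collections import Counter

# Positions flagged on every chip ever run but always negative on follow-up,
# treated here as recurrent false positives to be ignored (hypothetical IDs).
recurrent_false_positives = {"CHR7:1203456"}

# Hypothetical redundant calls for the same bases from overlapping probes.
raw_calls = [
    ("MYH7:c.2389",  "A"),
    ("MYH7:c.2389",  "A"),
    ("MYH7:c.2389",  "G"),    # conflicting call, resolved by majority vote
    ("GJB2:c.35",    "delG"),
    ("CHR7:1203456", "T"),    # known recurrent false positive, dropped
]

def consensus_calls(calls, known_false_positives):
    """Return one majority-vote call per position, skipping positions that
    are recurrent false positives."""
    by_position = {}
    for position, call in calls:
        by_position.setdefault(position, []).append(call)
    results = {}
    for position, observed in by_position.items():
        if position in known_false_positives:
            continue  # always negative on follow-up, so ignore
        results[position] = Counter(observed).most_common(1)[0][0]
    return results

print(consensus_calls(raw_calls, recurrent_false_positives))
```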
Harvard-Partners hopes that the HCM test will be ready to go in mid-fall. This past July, their agreement with Affymetrix expanded to a manufacturing partnership under which Affymetrix will provide custom-made arrays.
FDA Steps In
In one sense the approval of the AmpliChip has led the way for microarray-based diagnostics to leave the drawing board and move toward the market. But many of the newer diagnostic devices following in its wake are horses of a slightly different color; they are in vitro diagnostics, but they also are the trickier-to-approve multivariate index assays.
Medical devices are becoming more and more sophisticated and more tightly tied to particular therapeutics. “The diagnostics that are being developed, especially the molecular diagnostics, become ever more powerful tools to help the physicians select which kind of therapy for which patient,” says Agendia’s Bernards. “It was only logical to assume that sooner or later that the FDA would say, ‘Listen, these diagnostics really are not that different in terms of the way they impact patient management as compared to the drugs. So why should we then control drugs and not the diagnostics that accompany those drugs?’”
The FDA divvies medical devices into three categories based on a device’s intended use and the risk associated with it. “If it is a really complex microarray-based test that’s intended to tell you you have toenail fungus, we would consider that a very low-risk intended use even though it is on a very complex platform, and it would probably be class I,” says Mansfield at the FDA. “The technology doesn’t actually act very strongly in the risk assessment.”
Still, IVDMIAs worry the FDA. A year ago, FDA issued a draft guidance document concerning the approval process of IVDMIAs and published a newer version this past July with an expanded definition of what an IVDMIA is. In this guidance, an IVDMIA is any medical device that uses a score or index generated by a black-box process to classify a particular patient in a way that the end user — generally the physician — cannot verify. To date, Agendia’s MammaPrint is the only test approved under these draft guidances.
IVDMIAs differ from tests like the AmpliChip because they rely on an algorithm to analyze the data. Usually, this algorithm is derived from the analysis of complex datasets during which the developers decide which data points to include in future analyses. “The complexity that we’re actually worried about is not what’s coming out of it in a diagnostic sense. It’s the complexity of the dataset that was used to develop the algorithm,” Mansfield says.
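As a toy illustration of how such an algorithm can be derived and applied, the sketch below ranks genes by their correlation with outcome in a small invented training set, keeps the top-ranked genes as a signature, and classifies a new sample by comparing its profile to the average profiles of the two outcome groups. Every gene name, value, and rule here is made up for illustration; it is not any company's actual method.

```python
import numpy as np

# Toy training data: rows are tumors, columns are genes (log2 expression);
# outcome: 1 = recurred within five years, 0 = did not.
genes = ["GENE_A", "GENE_B", "GENE_C", "GENE_D"]
expression = np.array([
    [8.2, 5.1, 9.0, 4.2],
    [7.9, 5.3, 8.8, 4.0],
    [4.1, 5.0, 5.2, 4.3],
    [4.3, 5.2, 5.0, 4.1],
])
outcome = np.array([1, 1, 0, 0])

# 1. Rank genes by absolute correlation with outcome; keep the top two.
correlations = [abs(np.corrcoef(expression[:, i], outcome)[0, 1])
                for i in range(len(genes))]
signature_idx = list(np.argsort(correlations)[::-1][:2])
signature = [genes[i] for i in signature_idx]

# 2. Build average signature-gene profiles for each outcome group.
recurred_profile = expression[outcome == 1][:, signature_idx].mean(axis=0)
good_profile     = expression[outcome == 0][:, signature_idx].mean(axis=0)

def classify(new_tumor):
    """Call a new tumor high or low risk by which group profile it sits closer to."""
    profile = np.array([new_tumor[i] for i in signature_idx])
    dist_bad  = np.linalg.norm(profile - recurred_profile)
    dist_good = np.linalg.norm(profile - good_profile)
    return "high risk" if dist_bad < dist_good else "low risk"

print(signature, classify([8.0, 5.2, 8.9, 4.1]))
```

The end user sees only the final high-risk or low-risk call, not the training data or the weighting behind it, which is exactly the black-box quality that places such tests in the IVDMIA category.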
Take MammaPrint as an example. The final, approved version of MammaPrint tests 70 genes to determine the prognosis of someone with breast cancer, classifying a patient as being at high or low risk of recurrence. Combining bioinformatics and statistical analysis, Agendia whittled down the genes included in the test to those that are most predictive for disease outcome. Of the 70 total, the functions of 20 are unknown. “Somehow these genes play an important role in breast cancer that we don’t understand yet, even though we can still use these genes as biomarkers of disease outcome,” says Bernards. “Microarray technology allows you to have what we call an unbiased approach, where you say, ‘Let’s assume that we are dumb and we don’t know which genes are important and let the biology of breast cancer tell us which genes are important.’” While that makes sense biologically, from a regulatory perspective, having genes of unknown function included in a diagnostic can be cause for concern.
Clearly, there are plenty of issues to work out — not least of which is clearing up remaining confusion about which array-based diagnostics fall under FDA’s jurisdiction. “It’s not clear to me whether the type of pipeline we’re developing to be able to interpret this data and report it is something that would end up making [our chip] fall into [the IVDMIA category],” says Rehm at Harvard.
But Who Will Pay?
Many of the microarray-based diagnostics under development started out because the developers thought their tests would be useful to doctors and clinicians, and eventually help to personalize treatment. But what is obvious to the developer or manufacturer may not be readily accepted by the insurance company that decides whether to cover the cost of the test.
“These tests show promise,” says Fiona Wilmot, Blue Shield of California’s medical director of policy, pharmacy, and therapeutics. “With limited evidence of effectiveness and outcomes available at this point, it is difficult to understand when, and for whom, a test should be used.”
Neither Blue Shield of California nor Aetna covers Roche’s AmpliChip CYP450 test or Agendia’s MammaPrint. Both insurance companies, which serve a combined 19 million people, say that the tests fail to meet their criteria for medically necessary technologies or diagnostics, and label the tests as experimental and investigational.
To be covered by these companies, a technology must be supported by conclusive scientific evidence in the literature that it not only improves the patient’s net health but is as beneficial as established alternatives and has worked outside of developmental settings. “The most important point is that any technology must improve the net health outcome,” says Robert McDonough, Aetna’s senior medical director for clinical research and policy development.
According to Wilmot, it is on these last points that many technologies falter, including the AmpliChip and MammaPrint. “There’s no evidence that they change the outcome,” she says. Both companies say that good prospective clinical trials are needed to give reliable evidence that these diagnostic tests will actually improve how patients’ treatment is managed. “We need evidence that the technology can either measure [or] alter the physiological changes related to disease,” McDonough says. “There should actually be evidence based on established medical facts that the measurement or alteration affects health outcome.”
Though Blue Shield of California and Aetna don’t cover microarray-based diagnostics, they still see them as playing a role in the personalization of medicine — provided all the clinical tests go well. “I do believe there is an important future for many different potential uses for them,” McDonough says. “There are so many ongoing clinical studies … that eventually these tests are going to find numerous potential uses and become part of our therapeutic armamentarium, but that would require the completion of these studies that are ongoing.”
If microarray-based diagnostics eventually provide the evidence that the insurance companies want, personalized medicine might leap closer to being more widely used in the clinic — as their developers have hoped all along.
“This is personalized medicine,” says Agendia’s Bernards about array-based diagnostics. “This basically means that you don’t treat breast cancer as a homogeneous disease, but you appreciate that each tumor is different and has a different response to a standard drug regimen and that you can use this technology to predict which of the breast cancer patients will and which will not respond.”
And In the Future
Despite a lukewarm reception by insurance companies, many people still see a bright future for microarrays as a diagnostic tool — especially in oncology. Though chips will compete with other technologies, especially sequencing, they are still carving out a role to play in the clinic.
“I think they have a tremendous potential. I think we’re going to see quite a bit coming out on the market in the next few years,” predicts Boulder’s Rowlen, who has begun her own biomedical instruments company called InDevR.
Many of the companies whose arrays are undergoing review or validation have even more microarray-based tests in the pipeline. DiaGenic, Agendia, and Pathwork Diagnostics all have cancer-related diagnostics in their infancy. “I think that cancer will be the major field and now, after FDA has approved the MammaPrint assay, I think that will be a trigger for more and more companies to put more efforts into developing cancer signatures,” says DiaGenic’s Christiansen, whose company is also making a breast cancer diagnostic to supplement mammograms.
Agendia has a development pipeline with about 50 products in it. “Our product closest to market is a similar test for colon cancer, which we call ColoPrint, [and which] we will introduce in Europe early next year,” Bernards says.
And Roche, whose AmpliChip CYP450 test started much of the hoopla, is still in the microarray field. It recently signed an agreement to buy NimbleGen, the Wisconsin-based company that makes DNA microarrays, and it is expanding its AmpliChip series. Roche is currently developing new tests for leukemia and lymphoma, as well as for the p53 tumor suppressor gene.
Pathwork Diagnostics’ Neff, whose company focuses on cancer, thinks microarrays will have a good run. But she acknowledges that they won’t be the perfect technology for every purpose. “It’s not the right platform for every test,” she says. The application is going to drive the technology, not vice versa, she adds.
One technology that may give microarrays a run for their money is sequencing, as the price of that platform falls precipitously. “Someday it may become cheaper just to sequence your whole gene, even though you know all the mutations that you might find. I think it is going to come full circle at some point,” says Harvard’s Rehm. “But we’re certainly not there today.”
For today, then, as microarrays begin to come of age and seek out their place in the diagnostic world, they offer potential for a new way to diagnose and treat a variety of diseases.
For researchers, that’s an exciting future. “I think especially in the clinical diagnostics world, this is going to revolutionize everything — the way people do it, how their labs are set up, how data gets looked at,” says Emory’s Hegde.
The Rise of the AmpliChip
Before Roche’s AmpliChip CYP450 test blazed the way for microarrays to have a role in the clinic, the company hit a major obstacle on its way to market: the US regulatory system.
“It’s sort of a long and confusing story. It’s not because of anything particular about the Roche device; it’s the peculiarities of the regulatory system,” says Elizabeth Mansfield, a science policy analyst at the Food and Drug Administration.
At the time the genotyping AmpliChip test was launched in late June of 2003, the FDA was not regulating any microarrays on the market. So Roche marketed the AmpliChip as an analyte-specific reagent, or the main ingredient of an in-house lab test — a classification that does not require pre-market approval from FDA. FDA, however, had other ideas about the proper classification.
In July of 2003, Steven Gutman, the director of the Office of In Vitro Diagnostic Device Evaluation and Safety, sent a letter to Roche Molecular Diagnostics’ general manager, inviting Roche to discuss AmpliChip’s classification, which FDA believed was better suited as a class II or III medical device — categories that require the manufacturer to go through FDA before marketing the device.
Following a few back-and-forths between FDA and Roche, the AmpliChip was finally catalogued as a high-risk, class III device. “The hook was that because we had never reviewed a device with that same intended use, our law says it is automatically class III,” Mansfield says. Class III devices are usually complex and either have a direct impact on supporting or sustaining a patient’s life, or confer a risk of illness or injury.
After reviewing Roche’s materials, though, the FDA down-classified the AmpliChip test. “The submission when it came to us was in fact a 510(k) … based on our assessment that this is really a moderate-risk device,” Mansfield says. According to Walter Koch, the vice president and head of research at Roche Molecular Diagnostics, such reclassifications are not unusual for products based on novel technologies as both the FDA and the manufacturer work out the kinks of an untested system.
FDA approved the AmpliChip in December of 2004 as a class II device, making way for more tests to follow in its wake.
— CC
Making Sense of the Jargon
Class I medical devices: These simple devices are considered low risk and are exempt from premarket review.
Class II medical devices: These devices are more complex. They are considered moderate risk and require a 510(k) submission prior to marketing.
Class III medical devices: These highly complex, highly regulated devices require premarket approval from FDA. Often, they support or sustain life, or carry a risk of illness or injury.
510(k): Manufacturers of class II medical devices submit this premarket notification, which compares a new device to one already on the market to show that the new device is substantially equivalent to what’s currently available.
In Vitro Diagnostic Multivariate Index Assay (IVDMIA): These tests are interpreted by an algorithm in a way that the end user cannot independently verify. The algorithm is typically derived from analyzing correlations between multivariate data and clinical outcomes.
Clinical Laboratory Improvement Amendments (CLIA): Congress passed these amendments in 1988 to establish quality standards for laboratory testing.
Laboratory-developed or home-brew tests: These are tests developed by a clinical laboratory for use only in that laboratory.
— CC
Microarrays: A Definitive Diagnostic for Alzheimer’s?
For some diseases, it is especially difficult to give a clear diagnosis. For Alzheimer’s disease, which affects 18 million people worldwide, a number predicted to grow to 34 million by 2025, the only definitive diagnosis comes from a post-mortem look at the person’s brain. But two companies, DiaGenic and ExonHit Therapeutics, are pinning their hopes on the microarray to diagnose Alzheimer’s disease in its early stages. To do this, they are using patients’ blood to look into what is going on in their brains. But confirming that what they see in blood really reflects what is going on in the brain might be difficult.
“What we have seen and what you now find in the literature is an increased interest in the use of blood as a clinical sample linked to different neurological disorders because there you don’t have the option of taking a biopsy,” says Dag Christiansen, vice president of marketing at DiaGenic.
The theory is that changes wrought in the brain can be detected through subtle gene expression changes observable in blood. Both DiaGenic and ExonHit Therapeutics have uncovered gene expression signatures associated with Alzheimer’s disease. DiaGenic tested its signature and was able to distinguish people with Alzheimer’s disease from those with Parkinson’s disease. ExonHit, meanwhile, focused on alternative gene splicing and found a signature based on 1,124 genes and their splice variants. They, too, could distinguish people with Alzheimer’s disease from those without it. DiaGenic hopes to have the CE mark for its test by the end of the year and then follow up with FDA approval, while ExonHit hopes to seek approval for its test in 2009.
In these studies, though, the people with Alzheimer’s were alive and therefore diagnosed through mental state examinations rather than definitive brain biopsies. Full confirmation that people had Alzheimer’s and not another form of dementia will only come from longitudinal studies that follow patients to their deaths. “That, I think, is a challenge for all companies developing diagnostics for early detection of Alzheimer’s,” says Christiansen.
— CC