At A Glance
1993 — PhD, biology, University of Oregon
1986 — BA, biology, University of California, Santa Barbara
Postdoctoral Fellowships — Arthritis Foundation (1996), Cancer Research Institute (1993-1996)
2001-Present — Project manager, Biomedical Assays Group, Molecular Diagnostics, Agilent Technologies
1996-2001 — Research scientist, Chemical and Biological Systems, Hewlett-Packard and Agilent Technologies
Laurakay Bruhn, 39, has been involved in the development of Agilent Technologies’ DNA microarray platform for longer than the company has been around. Previously, Bruhn worked in Hewlett-Packard’s Chemical and Biological Systems Department and joined Agilent when the company spun off from HP in 1999.
Since 2001, she has been a project manager for Agilent Laboratories, the company’s Palo Alto, Calif., central research organization, which is separate from the varied product research and development efforts in each of the company’s business groups.
As part of her work, Bruhn has been involved in a microarray-based research collaboration, now in its third year, with Stanford University’s Donald Reynolds Cardiovascular Clinical Research Center. The study is seeking to identify genes linked to heart disease, and the molecular pathways for the primary disease process in the blood vessel wall.
For Agilent, this research effort has been running in parallel with microarray research conducted by the Netherlands Cancer Institute and Rosetta Inpharmatics, two organizations that utilize the Agilent DNA microarray platform. That effort is leading to a large-scale clinical study in Europe that will start soon to evaluate a breast-cancer molecular signature.
Bruhn recently spoke with BioArray News about Agilent’s research and development efforts with microarrays and the company’s progress in pushing the tool toward the clinical marketplace.
Agilent’s collaboration with Agendia and the Netherlands Cancer Institute appears to be leading edge, and one of the places where a beachhead into the clinical environment looks possible. Do you think microarray-based cardiovascular research might be able to hit the kind of on-ramp that we are seeing in breast cancer?
I think we are a bit farther out for cardiovascular disease than for cancer.
From the beginning of the availability of the technology, the cancer community has embraced DNA microarrays ambitiously and forcefully. If you look at the literature, something on the order of 40 percent of papers that use array technology are from cancer researchers. Cancer research using arrays is advanced in part because of practical things like sample availability: It’s pretty easy to get a cancer patient to give you a piece of a tumor — because they don’t want any of it. In cardiovascular disease, one of the biggest hurdles is that you generally have small sample sets, and there can be quite striking differences between the patients.
On the cardiovascular side, the available diagnostic tools are quite insufficient. For about a third of patients who have cardiovascular disease, the symptom they present with is that they are dead. So there is great motivation to develop better diagnostic tools. So many people have this type of disease that I think it will motivate people to be more and more creative about how to use these multiplex technologies to come up with diagnostics. Diseases like cardiovascular disease are so heterogeneous, and so affected by genetics and environment, that this is just a natural thing that will happen.
Would you tell me a little about your work?
I’m a project manager in Agilent Laboratories, the central research organization for Agilent Technologies. [The company] has substantial R&D organizations attached to the product-making divisions, but what’s different about Agilent Labs is that we are looking toward the future: we do research with a longer timeline in terms of how far away it might be from leading to a specific product.
The group that I am the project manager for is part of the Molecular Diagnostics department of Agilent Laboratories, and it is made up of about half computational biologists and half molecular biologists. Our group has historically been involved quite deeply in developing the DNA microarray platform that Agilent now sells commercially for research. Our charge has been to look for new applications and new markets. Molecular diagnostics would be a new market for Agilent. We don’t necessarily limit our research to the array platform, although we are generally focused on multiplex molecular profiling. The overarching theme is basically to look for opportunities in the diagnostics marketplace where multiplexed assays, measuring more than one protein or more than one RNA at a time, can find groups of genes, called signatures or profiles, that correlate with a specific disease state, for example, with the propensity to respond to specific drugs.
We do a lot of our research with external collaborators like the Reynolds Center. In terms of disease types, we focus mostly on cancer and cardiovascular disease at this point. The interesting thing about both of those, and in particular, cardiovascular disease, is there are a lot of different systemic processes going on in relation to inflammation and response to inflammatory stimuli. Some of the things we learn may have interesting ramifications for other types of disease like autoimmune disease and things like that, even though that is not the primary target at this point.
You have been involved in the development of Agilent’s microarrays for more than five years now. What do you think was done well in developing this tool?
It was recognized early on that these types of devices would be generating data that your average biologist would be unable to deal with. One thing that was done right was to have groups like ours, half computational biologists and half molecular biologists, right from the beginning. Another thing that has been quite powerful with regard to the Reynolds collaboration is the ability to make customized arrays. For example, when this work started about three years ago, there were no genome-wide chips available, and Tom Quertermous [director of the division of cardiovascular medicine at the Reynolds Center] said his group wanted to make sure that they didn’t miss genes that were expressed in the vessel wall in the various studies they were planning. So they went after developing collections of such genes, and because Agilent has the capability of printing custom microarrays, we were able to build arrays covering them. If they had taken what was generically available off the shelf at that time, they would have missed a lot of the genes that they wanted to see.
Now that we have [the whole human genome array], the research with Quertermous’s group, which started with custom cDNA arrays, has moved to custom oligo arrays. We developed an array for this before the whole human genome array came out, using a custom oligo array based on one of Agilent’s catalog products, with some extra genes that Quertermous’s group was really interested in.
How do you integrate the data that came off the cDNA arrays with that of the oligo arrays?
We have a variety of ways. At a brute-force level, you can BLAST the oligos against the cDNAs and develop a match that way, or you can correlate the cDNAs and oligos with the genes they represent. You can sync up the data up to a point, but for some genes, because the cDNA probes detect a longer stretch of sequence, you don’t necessarily expect the results to agree.
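At the gene level, that brute-force matching amounts to a join on the gene each probe represents. A minimal sketch, with hypothetical probe IDs and gene symbols; a real pipeline would derive these mappings from BLAST hits or annotation files rather than hand-written dictionaries:

```python
def match_probes(cdna_map, oligo_map):
    """Return {gene: (cdna_probe, oligo_probe)} for genes measured on both platforms."""
    oligo_by_gene = {gene: probe for probe, gene in oligo_map.items()}
    return {
        gene: (cdna_probe, oligo_by_gene[gene])
        for cdna_probe, gene in cdna_map.items()
        if gene in oligo_by_gene
    }

# Hypothetical probe-to-gene annotations for the two platforms.
cdna_probe_to_gene = {"cDNA_001": "VCAM1", "cDNA_002": "ICAM1", "cDNA_003": "IL6"}
oligo_probe_to_gene = {"oligo_A": "VCAM1", "oligo_B": "IL6", "oligo_C": "MMP9"}

matched = match_probes(cdna_probe_to_gene, oligo_probe_to_gene)
# Genes measured on both platforms can now be compared side by side;
# unmatched probes stay platform-specific.
```

Even for matched genes, the caveat below about splice variants still applies: a shared gene symbol does not guarantee the two probes measured the same transcripts.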
There are some cases where the cDNA probe would be predicted to detect multiple splice variants and the oligo probe would be predicted to detect only one, or a subset, of them. In those cases, depending on the tissues you are looking at and which assortment of splice variants is expressed, you wouldn’t necessarily expect to be able to sync up the data. When we switched over, we divided by project and we didn’t mix the arrays. There is one set of samples that was analyzed on the cDNA arrays that is being reanalyzed on the oligo arrays.
We participated in a pilot study where we had different brain tumor samples and we assayed them simultaneously on cDNA and in-situ oligo arrays. If you look at a visualization of the gene list that most distinguished those samples, the picture is very, very similar. So the biology that the cDNA arrays and the oligo arrays were telling us in that case was exactly the same. But if you look on the gene-by-gene level, you have to be very careful and assess whether you would actually expect the results to be the same, based on the sequence of the cDNA and the sequence of the oligo.
We talk about data because it is a consistent theme we hear from researchers. They say they want tools that help them understand what is coming from the arrays and to see beyond simple lists of genes.
We are doing that from a number of different directions. One is, for example, using the Gene Ontology (GO) annotation that is out there and growing daily. Some of the tools that we have developed basically utilize data like that; it could be Gene Ontology annotation, it could be pathway annotation. So say you do a comparison of two different types of samples: coronary arteries from patients that have atherosclerosis, and those that don’t. You want to find out what genes are differentially regulated between those two sample sets, so you do statistical analysis to come up with a set of genes that most distinguish them. You typically get a list that includes hundreds or thousands of genes, and there is a huge need to make sense of what those genes mean. Some of the tools that we have developed take, for example, the Gene Ontology annotation and statistically analyze whether there is an overrepresentation of genes associated with a pathway, or cellular function, within the set of genes that distinguishes the samples with atherosclerosis from the ones without. So instead of looking through a list of genes, researchers are now looking at pathways, and then they can go back to see what genes are correlated with those. That is a big focus now.
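The overrepresentation test described here is commonly computed as a hypergeometric tail probability: given how many genes on the array carry a GO annotation, how surprising is the count of annotated genes in the differentially expressed list? A minimal sketch of that statistic, with hypothetical counts (this illustrates the standard calculation, not Agilent’s actual tool):

```python
from math import comb

def go_enrichment_pvalue(total_genes, annotated, selected, annotated_in_selected):
    """Hypergeometric upper tail: probability of drawing at least
    `annotated_in_selected` annotated genes when picking `selected`
    genes at random from `total_genes`, of which `annotated` carry
    the GO term."""
    denom = comb(total_genes, selected)
    return sum(
        comb(annotated, k) * comb(total_genes - annotated, selected - k)
        for k in range(annotated_in_selected, min(selected, annotated) + 1)
    ) / denom

# Hypothetical numbers: 20,000 genes on the array, 150 annotated to an
# "inflammatory response" term, 400 differentially expressed genes, 12 of
# which carry the annotation (expected by chance: 400 * 150 / 20000 = 3).
pval = go_enrichment_pvalue(20000, 150, 400, 12)
# A small p-value flags the term as overrepresented in the gene list.
```

In practice a tool would run this test for every GO term and pathway and correct for multiple testing, but the per-term statistic is as simple as the function above.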
This is something that had been on our radar screen for quite some time, but was stimulated a couple of years ago during one of our regular Tuesday meetings with the Quertermous group. [Tom] had spent the weekend with Photoshop basically color-coding — by hand — the different genes in this one gene list we were looking at in terms of their associations, whether they were related to cytokines, or cytokine responses. A few weeks after that, our computational biologists developed an automated tool to do that, producing a very similar visualization with the push of a button.
I see mention of a prototype protein array used in the Reynolds Center collaboration. How is Agilent approaching protein microarrays?
Agilent has developed two different types of technologies for making DNA microarrays. One is a deposition system that deposits material made offline through inkjet heads onto glass substrates. The other is in situ synthesized oligonucleotide arrays. The DNA microarrays that Agilent sells now are exclusively made by in situ synthesis. There is a project here that uses the deposition technology to make protein arrays. We are exploring from the research side, basically how well protein arrays can work, and, importantly, what type of biological questions they may be useful for, and to get a sense of what the market would look like.
The prototype protein array [created in conjunction with the Quertermous group] is an antibody array to look at serum samples to see whether there might be protein profiles in the serum that correlate with a propensity for heart disease.
Part of the antibody collection that ended up on the chip was developed by looking at some of the expression profiling studies that were done on patients with different types of heart disease. We looked for transcripts that seem to have interesting biological correlations with different types of heart disease, and looked specifically at the ones expressing proteins that might go out into the serum and be detectable there. For early screening types of assays, people aren’t likely to give you pieces of their vessel wall. But people are pretty willing to give serum. We are using these prototype antibody arrays on a series of patient serum samples from Stanford to try to assess their utility and functionality from the technology side, and their utility from the answering-interesting-biological-questions side, with an eye toward biological questions that might specifically lead a researcher to diagnostic assays.
The idea here is that you are going to gain more useful data by looking at a panel of proteins, although you may in some cases filter to a one-protein assay at some point. I guess my mind says a lot of those that exist [one protein assays] have already been found, and the idea for protein arrays is to look at signatures and panels of different proteins simultaneously to see correlations with a propensity to have different types of heart disease, or outcomes when you have different types of heart disease.
Let’s look to the future now. What might microarrays look like five years from now?
From the clinical perspective, one of the trends is toward smaller, more focused arrays. Agendia, for example, is looking at using our 8-pack format for their clinical trials. There is an effort here in Agilent Labs to develop not just higher-throughput but more robust platforms, more hands-off types of technology that can be as reproducible as possible.
[There is a question about] whether this technology will continue to be something that is used in a reference lab as opposed to individual hospital clinics. In the reference lab setting, they are going to want higher-throughput instrumentation. If it does get dispersed into more localized hospital clinics, there will be less patient throughput needed, but still a need for robust performance. I would speculate that it will definitely start in the reference lab, and Agendia, if nothing else, demonstrates that. Whether the technology gets distributed into hospital clinical labs in five years, beyond five years, or maybe never is unclear at this point. There are tests that started out in reference labs, went out to the clinical labs attached to hospitals, and have now gone back to reference labs because it is more efficient and cost effective. Cost is always going to be a big key. Whatever is the cheapest way to deliver good answers to patients is what will win.
Is there a price point for arrays that will unlock this?
We have examined that question. Things like Myriad’s BRCA test may seem expensive. Compared to some of the standard clinical chemistry tests that are done, there is an order of magnitude difference in the pricing. It is a reimbursable test because it brings a lot of value to those patients. It happens to be a fairly expensive technology, but there is nothing that does what they do right now. I don’t see a magic price point. There is going to be a combination of price related to value and whether, overall, the whole healthcare system ends up saving money and lives. So the individual tests themselves could seem more or less expensive. But what is going to be examined is the cost of treating that patient over time. So you spend X amount of money on a test, but save an even greater amount of money because you give them the right treatment the first time. It is going to be worthwhile across the healthcare system, and I think the insurance people will realize this.
Perhaps the greatest hurdle to getting into the clinical market is the regulatory agencies. How do you see this playing out?
I can’t talk in great detail, but I would say overall that we are actively engaged with a number of groups in the US and Europe to try to better understand the various regulations that apply.
Agilent is very actively assessing opportunities in the molecular diagnostics market, from what this is going to do from a patient’s point of view, to which pathways are efficient avenues for getting this type of technology adopted in a clinical lab setting. I can’t comment too specifically on the current view, but Agilent continues to analyze the regulatory rules and regulations that may apply when researchers take this type of technology into the clinic.
Agendia is a spinoff company from the Netherlands Cancer Institute, and they are gearing up to use our platform in Europe for a series of large-scale clinical studies of a breast cancer signature. The regulatory environment is different in Europe and the US. So, through Agendia, we are basically assessing the situation in a very detailed, practical way. And in the US, we are working with a few different groups who have interest in using our expression platform in clinical diagnosis.
We have been involved with the FDA in different contexts, and we are quite aware that they are soliciting input on this general realm of how things like DNA microarrays might play in a clinical diagnostics setting. We don’t have final answers about just how this will be done. I think we, like the rest of the world, are convinced that there is such a value for patients in many types of multiplex molecular technologies to apply in a clinical setting, that it will happen — it is just a matter of figuring out the details of how and when it will happen.