Center for Drug Discovery
At A Glance
Name: Donald Lo
Position: Director, Center for Drug Discovery; associate professor, Department of Neurobiology, Duke University Medical Center
Background: Chief scientific officer, Cogent Neuroscience, 1998-2002; assistant professor, Duke University, 1992-1997; research associate, Ludwig Institute for Cancer Research, University College London, 1989-1992; PhD, Yale University, 1989.
Now that high-throughput and high-content imaging techniques have helped popularize cell-based assays for drug discovery, many scientists wonder whether the same techniques can be used with more complex samples, such as 3D cell cultures, living tissue slices, or even whole organisms like nematodes and zebrafish.
One such research group exploring these applications is led by Donald Lo at the Duke University Medical School's Center for Drug Discovery. Lo and colleagues' main interest is in drug discovery for neurodegenerative diseases, and they have found that implementing high-content screens using live explant tissue from rat brains may provide more relevant data than screening out-of-context individual cells.
One of Lo's former lab members, pharmaceutical scientist Joseph Trask, helped implement high-content screening at the Center for Drug Discovery, and will be giving a talk about the group's work at next week's High-Content Analysis conference in San Francisco. Lo's group will continue the work as part of its neurodegenerative drug discovery program, and Lo took a few moments this week to discuss with CBA News the promise and pitfalls of using HCS to screen complex biological preparations.
Tell me about the Duke Center for Drug Discovery. It's not yet listed in Duke's online directory.
It's not in any of the directories because it's pretty new. We only established the Center for Drug Discovery in the summer of 2004, and we thought that we would have a quiet period, and get a head of steam going before we did the standard splashy website. We're housed in a former biotech building space where several Duke laboratories have moved to take advantage of the open architecture.
A common theme of the labs up here is larger-scale, more screening-based projects. The Center for Drug Discovery itself is actually fairly small; we have about 15 staff members. We happen to be focused on neurological disorders just because of historical and personal interests thus far. Specifically, we are focused on neurodegenerative diseases like Huntington's and Alzheimer's, and on stroke.

We took the view that it really wasn't that meaningful for a fundamentally academic outfit to try to replicate the kind of scale and screening power that already exists in biotech and pharma, even to the extent of what we now call mainstream high-content screening, which is being done so well in the private sector. We thought we would focus our efforts on leading-edge technologies that are not really ready for prime time, and are maybe a little too risky for biopharma. But these are things that an academic lab that is really dedicated to true translational medicine could tackle. We hit on the idea that there was a growing need in the neurological space for what you might call ultra-high-content analysis, given that there are precious few, if any, validated targets for neurodegenerative disease and stroke.
So this particular project of quantifying neuronal health and viability in live brain tissue explants — how did you get into this work?
The technological side was meant to be a shorter-term addition to the lab. For many years now, we've been working on the biological side of this brain-slice-based high-content screen. Given that we don't have truly validated targets for diseases like Huntington's, having an in vitro or even a cell-based assay carries more speculative content than you might like. We thought that one way to get around this might be to do single-cell-based assays, but with those cells still within essentially living slabs of tissue. The slices we're using for explants are fairly thick — anywhere between 300 and 400 microns. We only look at the healthy neurons that are right in the center of the slice. They're all image-based, cell-based assays, where we're only looking at a subset of sentinel neurons, as it were, in each of these brain slice assays.
Historically, we thought it was hard enough to develop the biological side of things — being able to scale up to cutting tens of thousands of these brain slice explants per year in order to get some meaningful throughput. But for years we've been doing the assay reads manually, with actual people looking down fluorescence microscopes and scoring neurons one at a time. It works, because people can get really fast at that. And if you choose assay endpoints where the basal read level is a relatively small number of neurons, then with even a small team of three or four scientists working on this, we can already get through something like 5,000 or 10,000 gene or compound screens per year. While that's not a huge number in terms of high-throughput screening, our idea is that we're bridging traditional in vitro or high-content screens and in vivo animal models.
But you have incorporated a high-content screening system into the project, right?
Yeah, after several years of doing this and really running the risk of burning out our scientists, we thought it was time to automate. Many years ago, when we started this, there were few, if any, turnkey systems on the market. We had looked at putting together pieces of various optical and image-analysis systems, and it didn't really work that well. As of last year, we thought the industry had clearly made huge strides, and we were fortunate to have Joe Trask join our team for the better part of a year. His main task was to educate us about automation and image analysis, and to help us select the type of technology that would be most amenable to being adapted to a brain slice system, since these image-analysis systems are generally designed for cell culture systems.
And you selected the Cellomics ArrayScan VTI reader?
That instrument, although priced competitively, is still quite an investment for an academic lab. Was it difficult to justify the expense for your lab?
We were very fortunate. Particularly for an academic lab, this kind of purchase would generally be a university-wide purchase, and we would have to get a special grant. But we were very lucky to work with an organization called the Cure Huntington's Disease initiative. This group is essentially a virtual biopharma company, dedicated to fast-tracking new candidates for treating Huntington's. They support a lot of our operations, and more importantly, have a real biopharma mindset in terms of what it takes to move a drug-development program forward. They were very generous in providing funding for this instrument for us.
What are some of the difficulties involved with adapting the HCS reader to whole tissue slices?
There are a few fundamental issues that are still really challenging. Probably the very first one, which seems simple, was focus. Most of the automated HCS systems out there are built around the assumption that the object to be imaged is either on or very near the bottom surface of the well. Our brain slices are not only several hundred microns thick, but they also have to be suspended above the bottom of the cell culture well to allow the medium to flow underneath the slice. We're really talking about an object plane that is at least a millimeter or two off the bottom of the dish, and the variation in the Z-axis is easily several hundred microns from well to well.
So first off, I think the major challenge was having an instrument with an image-based focusing strategy that was fast enough and robust enough to do hundreds or thousands of such wells per day. A second major issue was image clarity. The brain slice specimen itself has a lot of light-scattering properties, probably because there are so many nerve processes running through the tissue, and only a minority of the volume is actually composed of cells. One of the key issues, then, was how much light throughput and image clarity we could get from the standard inverted optics of such screening systems.
We actually have a plea for the HCS industry to think about making an instrument with upright optics. For our system, and for whole-organism high-content screening — such as with worms and fish — I think a real technological advance would be if we could image from above. There are no instruments out there that we know of that have upright optics. Something we're going to work on over the next year is to see if we can jury-rig something where we can sort of fool the [ArrayScan] into looking through the top by essentially mounting some stereoscopic optical elements on top of the machine. But I'm on a solo campaign to convince manufacturers to offer an optional configuration with upright optics. I think it would open up these turnkey systems to a lot of other screening methods.
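The image-based focusing Lo describes can be illustrated with a minimal sketch: step the stage through Z, score each captured image with a sharpness metric, and refine around the best coarse position. This is a generic illustration, not the ArrayScan's actual algorithm; the `acquire` callback, the step sizes, and the normalized-variance metric are all assumptions for the example.

```python
import numpy as np

def sharpness(img: np.ndarray) -> float:
    """Normalized variance, a common image-sharpness (focus) metric."""
    mean = img.mean()
    return img.var() / (mean * mean + 1e-12)

def autofocus(acquire, z_min, z_max, coarse_step=100.0, fine_step=10.0):
    """Two-pass focus search over the Z axis (units: microns).

    `acquire(z)` is assumed to return an image captured at stage height z.
    Because the slice height can vary by hundreds of microns from well to
    well, we first scan coarsely across the full range, then refine around
    the best coarse position with a smaller step.
    """
    coarse = np.arange(z_min, z_max + coarse_step, coarse_step)
    z_best = max(coarse, key=lambda z: sharpness(acquire(z)))
    fine = np.arange(z_best - coarse_step,
                     z_best + coarse_step + fine_step, fine_step)
    return max(fine, key=lambda z: sharpness(acquire(z)))
```

The two-pass search trades a few extra acquisitions per well for robustness to the large well-to-well Z variation described above.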
Have you had to rig the instrument at all to better accommodate the incubation of live tissue slices?
We've not had to go there yet, because our culture system is actually fairly stable for a few hours outside of the incubator, especially if we keep the lids on. The ArrayScan VTI is perfectly fine as it is. If we did any kind of long time-course experiments on single plates, we'd probably have to use some kind of incubated system.
How about image analysis? What types of algorithms are you using? What are you looking for in these tissue slices?
We really have a couple of categories. One is a relatively simple, higher-throughput first-pass analysis, where what we really need to do is distinguish neuron-like objects at fairly low magnification. In fact, we've been somewhat surprised that the lower-end magnifications have worked quite well for us on this platform.

The more sophisticated analysis, which is where the push will be over the next year, is to look for secondary morphological features that tell us something about the health and wellness of each of these neurons in the brain slice. It turns out that one of the most sensitive indicators is the state of the neuron's dendritic tree. We're at a real advantage there, because a number of the HCS systems out there already have some version of a neurite- or process-outgrowth quantification algorithm. Those have proven difficult to apply directly to our system, mainly because of image clarity: by the time you have a millimeter or two of substance between the bottom of the well and the object plane, plus a lot of light scattering, imaging and resolving the dendrites of single neurons is challenging. I think it's going to be more of a hardware-based challenge in the beginning to get a really clear picture; the algorithms that have already been worked out will probably then work very well for us.
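The first-pass detection of neuron-like objects at low magnification can be sketched as a simple threshold-and-label pipeline: binarize the image, find connected components, and keep only components in a plausible size range. This is a generic illustration rather than the instrument vendor's algorithm; the threshold rule and the area limits are illustrative assumptions, not tuned values from Lo's assay.

```python
import numpy as np
from scipy import ndimage

def count_neuron_like_objects(img, thresh=None, min_area=20, max_area=500):
    """First-pass count of neuron-like objects in a low-magnification
    fluorescence image: global intensity threshold, connected-component
    labeling, then an area filter to reject specks and merged clumps."""
    if thresh is None:
        # Illustrative global threshold: mean + 2 standard deviations.
        thresh = img.mean() + 2.0 * img.std()
    mask = img > thresh
    labels, n = ndimage.label(mask)          # connected components
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = (areas >= min_area) & (areas <= max_area)
    return int(keep.sum())
```

A second-pass morphological analysis (e.g. scoring the dendritic tree of each detected neuron) would then operate only on the components this filter keeps.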
How does this type of work translate into the higher-throughput world of industrial pharmaceutical discovery?
About half of our operation is based on straight compound screening, and the other half is developing tools for target-based screens. We have a growing number of large-scale DNA-based and siRNA-based target screens, looking at targets from proteomic screens for neurological diseases like Huntington's. Our goal is to be able to use these DNA-based strategies to help provide biological validation for the targets that the compounds act on. Our view is not that a hit or hit candidate from our screen would be ready to go to the next stage of drug development; rather, it would help point us to a target or pathway that we could give biological validation to in a system that has, we hope, almost as much content as a whole animal system.