High-Throughput Bioscience Center, Dept. of Molecular Pharmacology, Stanford University School of Medicine
At A Glance
Name: David Solow-Cordero
Position: Associate Director, High-Throughput Bioscience Center, Dept. of Molecular Pharmacology, Stanford University School of Medicine
Background: Principal scientist, high-throughput screening and informatics, Ceretek — 1999-2003
Scientist, enzymology, FibroGen — 1995-1999
PhD, Dept. of Molecular and Cellular Pharmacology, University of California at Berkeley, 1995
BS, biology, Massachusetts Institute of Technology, 1990
As associate director of the Stanford University School of Medicine's High-Throughput Bioscience Center, David Solow-Cordero is responsible for the selection, programming, and maintenance of a wide variety of both high-throughput and high-content screening instrumentation, as well as for managing research collaborations with Stanford faculty and students. Prior to his appointment at Stanford, Solow-Cordero organized several screening campaigns at biotech firms, and believes there are some key differences between the screening cultures of industry and academia. Solow-Cordero will give a presentation on the use of HTS and HCS in the academic environment April 22 at the Society for Biomolecular Screening's West Coast Regional meeting in San Francisco. Last week, he took a few moments to provide Cell-Based Assay News with a preview of his talk and some thoughts on the National Institutes of Health's proposed Molecular Libraries Screening Centers Network.
How did the Stanford High-Throughput Bioscience Center get started?
It started essentially when James Chen, who is the director of the facility and an assistant professor here at Stanford, was hired. One of the things that he wanted to bring to Stanford was a screening group, so he was put in charge of starting this group up. He had done his graduate work in Stuart Schreiber's lab at Harvard, and actually left around the time that Stuart started Harvard's screening group, so he missed out on it. But he did help start up a facility during his postdoc at Johns Hopkins, and ran a small screen there. And as part of expanding on his work, which looks at the hedgehog pathway in zebrafish, he wanted to have the resource of a screening center here.
How did you become involved in it?
They put an ad in, and at the time, I was finishing up with a small biotech that didn't make it, so it was really perfect timing for me. I had previously set up a screening group at FibroGen, which is still around in South San Francisco. I was there for four years, and then I started at a small company called Ceretek, and essentially set up their screening group and ran that for four years. Scientifically, we were very successful, but it was just a lousy business market at the time, so it was very difficult for a small company to find funding.
Your background is primarily in industry, so what are the major differences you perceive so far between industry and academia when it comes to screening?
Probably the biggest difference, in a sense, is that in a company you're brought in and you know exactly what the company works on, what your targets are going to be, and what types of assays you're going to be running. Obviously these change over time, but they're usually done as a concerted effort. Here, because I'm serving the entire Stanford community, people will walk in and say that they want to run a certain type of assay, and we may start that a week later. So the flexibility required of us is absolutely tremendous compared to even a small biotech, where you're focused, and you may be doing GPCR screening — and obviously larger companies are going to have every type of screen. But we have one facility, one workstation, that we need to run every type of screen on, so we need as much flexibility as possible. And that also applies to compound libraries — we want them to be as diverse as possible, so that no matter what target people bring in, we have a higher chance of finding leads for it.
Your lab uses both high-throughput and high-content platforms. High-content implies cell-based assays, and high-throughput often implies biochemical assays. How do you draw a distinction?
I almost [think] the distinction between high-content and high-throughput [is that] high-throughput involves both cell-based and protein-based assays. Actually, of the first four high-throughput screens we've run, three of them are cell-based. And these are not high-content screens, but actually [involve] reporter genes where you're using live cells. As part of the system that I designed, I installed a robotic incubator so we can run fully automated cell-based screens where the readout is something like luciferase or a fluorescent signal. So we're able to run what are essentially fully automated cell-based primary screens where our throughput is as good as it is for enzyme-based screens.
So it's the same type of platform that would be used in biochemical assays, but using cells as another type of reagent?
How about high-content platforms?
The way that's working right now is as a stand-alone instrument, and we will have the ability to automate that further so we can do multiple plates. It's lower throughput — we have several labs that are looking at RNAi screens, and one lab has about 2,200 RNAi's that they're systematically knocking down, and then doing the imaging, the high-content screen, to look at these cells. And they might be looking at an endpoint like cell morphology, or apoptosis signals, or that type of thing. And we actually have people doing more elaborate screens, where they're looking at a single plate over maybe 12 hours, and they might be studying something like mitosis, or neurite outgrowth. So we have that instrument, and we actually have three on campus — one that I'm responsible for, one that I'm half responsible for, and one that I'm a minor consultant for.
Is it the same instrument platform?
It's the Axon ImageXpress [now owned by Molecular Devices]. There are three that I know of on campus. One thing for the future is that we just applied for a grant to get a more elaborate machine — a high-throughput confocal microscope, which is similar to the ImageXpress except that you have the confocal aspect. And there's another difference between us and industry — when we need new equipment, we apply for grants, and the problem with that is you apply in March, you're notified maybe in November, and you can buy it in April of the next year. The majority of our high-throughput equipment we got that way. We applied for a grant in February or March of 2003, and I was able to purchase all the equipment essentially in April 2004. So the majority of the equipment we have in this fully automated system was paid for by an NIH grant.
Have you evaluated specific confocal readers?
We wrote the grant with one specific machine in mind, but we're completely open-minded as to which one we would get. Actually, the grant for the high-throughput machine, which was done before I was hired, specified two particular brands, and I went with two completely different ones — mainly because I got a lot more for the same price.
What platforms were those?
For the robotic system we went with Caliper Life Sciences, which at the time was Zymark. We went with the Sciclone ALH3000, which is a liquid handler, and we also integrated that with the Caliper Life Sciences Twister II. Part of the reason we went with Caliper was that they built the robotics, software, and liquid handler, so everything would be done by one company. There are other single companies that do that, but I felt we got a tremendous amount of functionality for a lower price.
What types of promising technologies do you see on the horizon?
There's the one we talked about — the high-throughput confocal microscope. Having one of those would really give us high-quality data. With a lot of these things, the technology is so new that people in academia don't really know about it, because they don't go to the same trade shows that industry people do, where they're showing off half-million to million-dollar machines. A couple of other instruments that are appealing are things like the Molecular Devices FLIPR Tetra; I have the FlexStation, which is similar to the FLIPR in that it can do kinetic screens, but with the Tetra you get a lot more throughput. And I've actually run one of those in industry before, the previous version. The other one — again a Molecular Devices product, because they buy up all the competition — is the high-throughput patch-clamp machine, either the IonWorks or the PatchXpress. Having something like [that would be] far beyond what anyone does here at Stanford. A big part of getting an instrument like that is educating people that not only do we have it, but this is what you can use it for.
Does the Stanford lab have any aspirations to be part of the NIH Molecular Libraries Screening Centers Network?
We actually applied, and there were a couple of issues we had with the application that I'm sure most academic labs did. One is that you get money from the NIH, but you really don't have any control over how it's spent, in a sense. You're assigned projects, and these could come from anywhere, so you don't really know what you're going to be doing. And then, how does getting this grant benefit Stanford?
That was going to be my next question …
Well, we actually applied jointly with the [University of California, San Francisco] and SRI, a nearby research institute in Menlo Park. Essentially we said, 'We'll each do a third,' and go from there.
The NIH has said they would be deciding those any day now — do you have an idea?
We know where we stand — I hear things through the grapevine, and that says we probably won't be mentioned as one of those sites. It pays for half of the staff, and you might be able to get equipment out of it, but really the money wasn't that great, considering. I can't imagine any Stanford researcher applying for these grants because of the patent [issue] and the public disclosure of the data. At Harvard right now, I think they say that for anyone who runs a screen at its facility, their data will be made public within two years, which is more than enough time for a researcher to follow up on leads and file patents. I think the latest from the NIH was something like 60 days, which is impossible. You've run a screen, and 60 days later you might still be analyzing your data, or retesting your compounds.