TGen's David Evans Discusses High-Throughput siRNA Transfection and HCS

David Evans
Head of drug discovery
Translational Genomics Research Institute

At A Glance

Name: David Evans

Position: Head of drug discovery, Translational Genomics Research Institute, Gaithersburg, Md.

Background: Director, drug discovery, Psychiatric Genomics, Gaithersburg; Various positions, Serono Pharmaceuticals, Millennium Pharmaceuticals, PerSeptive Biosystems; BSc and PhD, biochemistry, Imperial College, London.


Having worked in the drug discovery industry for more than 15 years, David Evans was a natural fit when he was named head of drug discovery for the Translational Genomics Research Institute in 2003. At TGen, he is primarily responsible for building the infrastructure necessary for the institute's high-throughput chemical genomics approach. In particular, Evans has adopted high-throughput siRNA gene-silencing technology and high-content cell-based assays as a way to discover and validate therapeutic targets as part of TGen's cancer research initiative.

Evans, who will be giving a presentation on this subject at November's Pharma IQ High-Content Screening conference in London, took a few moments last week to discuss his work with CBA News.

How involved is TGen in the drug-discovery process?

That's the whole rationale behind TGen, which was founded by Jeff Trent when he rolled out of the National Human Genome Research Institute. The rationale was that there were a lot of papers being published at NIH, but not any translational efforts going beyond that. You'd publish a great paper in Science or Nature, and that didn't lead to someone getting better in the clinic. Jeff really wanted to build something that would allow people to translate the genomics revolution into new and revolutionary medicines.

They obviously had a lot of experience with array-based technologies. TGen has centers of excellence for both Agilent and Affymetrix arrays. We are also very fortunate to have Dick Love, who was one of the co-founders, and Daniel Von Hoff, who is our vice president of clinical development — these guys were founders of Ilex Oncology, so they have a strong business background. They founded Ilex and sold it for $500 million last year. Dan is an excellent clinician — he's done hundreds of clinical trials and has had many successful drugs approved by the FDA, and I think Dick Love has had one of the largest numbers of drugs approved of anyone alive. The plan for TGen was that it would be a non-profit organization that would go after new and novel targets, identify new drugs, move those into the clinic, and make better medicines for patients.

So that's TGen in a nutshell. Our group is focused on high-throughput siRNA transfection, and our therapeutic focus is oncology. TGen has a number of therapeutic areas of interest, with CNS, diabetes, and oncology likely being the top ones. We are using several tools to address our oncology research. One is obviously the cell models themselves. Another is high-throughput siRNA: we've automated the process, we spend a lot of time on assay development, we can do up to 30,000 transfections per day with each of our robotics platforms, and we're replicating those platforms as we move forward. We can do endpoint assays, so we can look at siRNA knockdown of gene targets in conjunction with a co-treatment and find new ways to kill the cells. In addition, if we get a very slight effect, or we want to validate it, we can use high-content screening. We have a collaboration with GE Healthcare where we use the IN Cell 1000 to basically go after that. That's where we stand right now. We have been successful in obtaining some pharmaceutical contracts, and we look at these as a way to grow TGen, and also as a way to provide those pharmas with the ability to do high-throughput transfections and hopefully leverage that in terms of improving their therapeutics or identifying new targets for therapeutic intervention.

How do you physically achieve a throughput of 30,000 siRNA transfections per day? Does this include the time it takes to conduct analyses?

At the moment we are using the Qiagen druggable genome library, which is 10,000 siRNAs — two siRNAs per target. So if we do our knockdown experiments in duplicate, that's 20,000 transfections per day. We're typically doing that within an eight-hour day. When I say at least 30,000, that assumes that we let the robot run overnight as well, and we've been doing that. The starting point for our assays is assay development, and that is crucial to getting robust and reliable data back. When I say one day for the screens, it's a little presumptuous, because we spend several months in assay development optimizing the cell culture conditions, optimizing the transfection conditions, et cetera. We have several ways of getting readouts. We have a molecular endpoint assay — in cancer, for example, one of the things you want to do is kill the cells, so you can measure cell viability at the end of the day. For more subtle responses that we might want to look at, high-content screening is very useful. As an example, we have some assays going right now where we're looking at cell proliferation rather than cell death, and seeing if we can block that proliferative step with siRNAs, and that would point to new and novel targets in the blockade of the cancer itself.
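As a rough illustration of the throughput arithmetic Evans quotes, the sketch below reproduces his numbers (library size, replicates per target, and the daytime versus overnight split); the capacity split is an assumption for illustration only, not a description of TGen's actual scheduling.

```python
# Back-of-the-envelope throughput sketch using the figures quoted in the interview.
# The overnight capacity split is an assumption for illustration.

LIBRARY_SIRNAS = 10_000   # siRNAs in the Qiagen druggable genome library (2 per target)
REPLICATES = 2            # knockdown experiments run in duplicate

transfections_per_screen = LIBRARY_SIRNAS * REPLICATES
print(f"Transfections per full-library screen: {transfections_per_screen:,}")  # 20,000

# An eight-hour day covers the 20,000; letting the robot run overnight as well
# pushes a single platform past the 30,000-per-day figure Evans cites.
DAYTIME_CAPACITY = 20_000
OVERNIGHT_CAPACITY = 10_000   # assumed, for illustration
print(f"Per-platform daily capacity: {DAYTIME_CAPACITY + OVERNIGHT_CAPACITY:,}")
```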

Have you moved at all away from high-content screening towards endpoint assays to evaluate the effects of your siRNA knockdowns?

It's a little bit of the opposite, actually. We start with endpoint assays in a primary screen, and have moved more towards high-content screening both for secondary validation and also, as I said, to perform some of the assays that you couldn't even do in an endpoint assay, where the variation in cell number is not large enough to detect. You really need to see a number of parameters in the cells at one time to know what the end effect is, and the only way to really do that is to visualize the cells. So high-content screening combined with siRNA is a very powerful technology in that it gives you the capability to look at multiple events at one time: you can look at the nucleus, cell morphology, and other biological markers, like those for mitochondrial viability. The cell itself will somewhat convey that it's not feeling well by changing shape, and that can also give you some indication. But one of the unexpected things we're finding is that the cells divide in a different way when they're exposed to different treatments. You would never see that in an endpoint assay. You would see an increase in growth of maybe 10 percent, but you would just assume that all the cells were growing evenly. We've found that there tends to be this nucleated growth effect under certain stimuli, and you'd never be able to identify that with an endpoint assay.
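As a purely illustrative contrast between the two readouts Evans compares, here is a minimal sketch, with made-up per-cell values, of how an endpoint assay collapses a well to a single number while a high-content readout retains several parameters per well. The field names and values are hypothetical, not output from any particular HCS package.

```python
from statistics import mean

# Hypothetical per-cell measurements for one well, as an image-analysis
# pipeline might emit them (illustrative values only).
cells = [
    {"nucleus_area_um2": 110, "cell_area_um2": 820, "mito_marker": 0.92},
    {"nucleus_area_um2": 95,  "cell_area_um2": 640, "mito_marker": 0.35},
    {"nucleus_area_um2": 130, "cell_area_um2": 900, "mito_marker": 0.88},
]

# An endpoint viability readout effectively collapses the well to one number.
endpoint_readout = len(cells)

# A high-content readout keeps multiple parameters per well, so shifts in
# morphology or marker intensity remain visible even when cell count is flat.
well_summary = {
    "cell_count": len(cells),
    "mean_nucleus_area_um2": mean(c["nucleus_area_um2"] for c in cells),
    "mean_cell_area_um2": mean(c["cell_area_um2"] for c in cells),
    "mean_mito_marker": mean(c["mito_marker"] for c in cells),
}

print(endpoint_readout)
print(well_summary)
```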

Your presentation abstract mentions that you've also been using HCS more in primary assays. This has been a challenge in pharma, but something everybody wants to be able to do. What are the stumbling blocks?

One of the biggest issues when you're doing high-content screening is deconvoluting the images in a rapid way. As part of our collaboration with GE Healthcare, we have access to a number of tools that have helped us in this regard. If you're doing high-content imaging, there are various issues you face. Storage is one, obviously, and that's one people focused on several years ago. Now that it's becoming a more standard platform, people are saying 'Well I'd like to do a more complex assay. How do I do it?' It's easy to do the assay — as biologists we can do a lot of different things to perturb the cells, make fluorescent molecules, and create things that fluoresce in one state but not in another. The problem for us, then, is how, across a 384-well plate and a 60- or 80-plate assay, we quantify that event in all of those wells accurately, with high precision, and in a meaningful way.
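To put the image-analysis load Evans describes in perspective, here is a purely illustrative calculation, using the plate counts from his example, of how many wells a single screen produces for quantification.

```python
# Rough scale of the image-quantification problem (illustrative only).
WELLS_PER_PLATE = 384

for plates in (60, 80):
    wells = WELLS_PER_PLATE * plates
    print(f"{plates} plates x {WELLS_PER_PLATE} wells = {wells:,} wells to quantify")
# 60 plates -> 23,040 wells; 80 plates -> 30,720 wells, each with one or more
# images to segment and measure accurately and reproducibly.
```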

I think the software tools that have been developed to do image analysis have improved tremendously just in the last two years or so. Biologists are very creative, and I think the software was somewhat behind the eight-ball in meeting that creativity. One of the biggest improvements I've seen is in software like Developer Toolbox, which GE recently released. This tool allows people to dissect out an image, build an algorithm in a modular fashion, and see what the image looks like as they're doing it. It's an extremely flexible package that allows you to ask a number of different questions about size, shape, granularity, and intensity, as well as other features that would be difficult to address in some of the older software packages. That has helped us in looking at some of these very subtle effects when you treat cells with siRNAs. Other companies, like Cellomics, have developed their own packages. I'm using GE as an example, but I think it's true for many vendors that the flexibility is there, so biologists have taken the next step in applying it, and I think you'll start to see even more creative ways of addressing cell-based assays with high-content screening moving forward.

Are you familiar with the cellular arrays for siRNA transfection being developed by David Sabatini's lab at the Whitehead Institute? Do you see any other technologies on your radar screen in this area?

Spyro Mousses of TGen had the first paper on RNAi microarrays in Genome Research some two years ago, and others have followed in the footsteps of that work. As siRNA has become more mainstream, people have begun to look at siRNA transfection, as well as cDNA transfection, in that way. We have a very active program looking at that technology as well, and we have a unique way of doing it. Unfortunately I can't tell you about it. But we think that this will be able to get siRNAs into cells that are otherwise very difficult to transfect, and combine that with the image analysis you need to quantify the result. I will tell you that there are many problems with that technology. The biggest one is that if you put a spot of material down, and for the sake of argument you have a 300-micron spot and a 10-micron cell, you can only get 30 cells across that spot if the cells are confluent. The problem is that you need to know where the spots are located, and you need to get enough cells in there to get a quantifiable result. That result is confounded by the fact that the siRNA takes a while to get into the cell, and then it takes a while to have its effect in silencing that protein. Meanwhile the cells are dividing happily, and you may reach a point of overconfluency, and some siRNAs may have a toxic effect on the cells. Those will hopefully be easy to see, but others will have very subtle effects that you may miss because of this very delicate balance between the size of the spot and the number of cells you need to get on top of it to get statistically valid results. That's one of the confounding features of that technology, and we and others are working on ways to get around that issue.
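As a purely illustrative sketch of the spot-geometry constraint Evans describes: the 300-micron spot and 10-micron cell figures come from his example, while the confluent-packing upper bound is an assumption added here to show why the usable cell count per spot is limited.

```python
import math

# Illustrative geometry for the reverse-transfection spot problem.
SPOT_DIAMETER_UM = 300.0   # printed siRNA/lipid spot (figure from the interview)
CELL_DIAMETER_UM = 10.0    # approximate adherent cell footprint (figure from the interview)

cells_across = SPOT_DIAMETER_UM / CELL_DIAMETER_UM
print(f"Cells across one spot diameter at confluency: {cells_across:.0f}")  # ~30

# Upper bound on cells covering the whole spot, assuming a confluent monolayer
# and ignoring packing inefficiency (assumption for illustration).
spot_area = math.pi * (SPOT_DIAMETER_UM / 2) ** 2
cell_area = math.pi * (CELL_DIAMETER_UM / 2) ** 2
print(f"Rough upper bound on cells per spot: {spot_area / cell_area:.0f}")  # ~900

# In practice, cell division during the knockdown window, uneven seeding, and
# toxic siRNAs all push the usable cell count per spot well below this bound,
# which is the statistical-power problem Evans describes.
```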
