
Wyeth's Haney Discusses HCS and RNAi for Target ID and Validation


At A Glance

Name: Steven Haney

Position: Senior scientist, Department of Biological Technologies; group leader, Oncology Genomics; and manager, High-Content Screening facility, Wyeth Research, Cambridge, Mass.

Background: Senior research scientist, infectious diseases, Wyeth-Ayerst Research, 1997-2001; Senior scientist, Cadus Pharmaceuticals, 1995-1997; Research fellow, molecular biology, Princeton University, 1991-1995; PhD, biological chemistry, University of Michigan, 1991.


As part of Wyeth's department of biological technologies, Steven Haney is responsible for helping Wyeth incorporate cutting-edge research tools, including high-content screening, into its everyday drug-discovery routine. In particular, Haney has developed a strong interest in the intersection of HCS and RNAi, and has given several talks on the subject at biotech conferences this year. Most recently, he wrote a review article on the topic, which appears in the December issue of the journal IDrugs. Last week, Haney discussed his work with CBA News.

What is your role at Wyeth?

I am in the biological technologies department, which is designed to either champion very sophisticated technologies, such as transcription profiling, or play a role in enabling departments to take these technologies on themselves. There is a mixture here — some departments will take on a very complex technology, and others will kind of ask us to fill that role on an ad hoc basis. So there are a number of technologies we deal with — proteomics, biomarker development, therapeutic and preclinical antibody generation — that all fall under the biological technologies umbrella. High-content [screening] is one of those. We do it in the research areas and in high-throughput screening, but our role here is to develop the technology outside of its core applications.

How did you first become involved with HCS and RNAi for target validation?

We started out with RNAi, and at the time, there were either reporter-based assays or phenotypic assays, such as whole-well caspase activation assays. That worked well. We ran transcription profiling experiments and generated huge numbers of genes we needed to validate. RNAi was clearly a much more effective tool than, for example, platform technologies such as introducing dominant-negative mutations into kinases, or something like that. It was just a very robust technology. We started there, and then asked, 'How do we do a better job of validating the target?' We actually had problems with whole-well assays, particularly with RNAi, because the dynamic range of RNAi is so broad that some RNAis act within hours and some act within days. We needed some way to capture more than just caspase activation across that wide dynamic range. That's one of the reasons we got involved in high-content screening — it enabled us to say much more about what was happening in cells than a straightforward enzyme-activity readout could.

So it was a matter of being able to see the process of target knockdowns over a longer period of time in living cells?

Well, for example — in the case of a fast-acting siRNA, where we look, say, three days out, we don't see caspase activity because the cells are gone, not because it's not active. So a whole-well assay that measures a single enzyme reporter doesn't capture why we're not getting the responses. It could be because the siRNA is not effective, or it could be because the siRNA is hyper-effective and has a very short time of onset. With high-content screening we actually captured cell-number data in addition to antibody data, and even within high-content, basic parameters like nuclear size gave us a lot of additional information about what the RNAi was doing in the cell.
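The logic Haney describes can be illustrated with a minimal sketch; the scoring function, thresholds, and control values below are illustrative assumptions, not Wyeth's actual pipeline:

```python
# Sketch: why cell count rescues a caspase-only readout. A fast-acting
# siRNA can show low caspase signal at day three simply because few cells
# remain. All thresholds and values here are illustrative assumptions.

def classify_well(caspase_fold_change: float, cell_count: int,
                  control_count: int = 2000) -> str:
    """Interpret one well using caspase activity plus cell number."""
    if caspase_fold_change > 2.0:            # apoptosis ongoing at readout
        return "active: slower-acting siRNA"
    if cell_count < 0.25 * control_count:    # cells already lost
        return "active: fast-acting siRNA (cells gone before readout)"
    return "inactive or ineffective knockdown"

# A whole-well caspase assay alone would call both of these wells negative.
print(classify_well(caspase_fold_change=0.9, cell_count=310))   # fast-acting
print(classify_well(caspase_fold_change=0.8, cell_count=2100))  # truly inactive
```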

Seemingly, one of the benefits here is that you are obtaining more information than you were even looking for… but how are pharma companies, and Wyeth in particular, handling the massive amounts of data that are being produced?

It's a huge challenge, and it's one that our own bioinformatics group within the biological technologies department has played a critical role in. The corporate data structure is capable of handling large amounts of data, but even at the corporate level, high-content data is just off the scale in terms of the amounts of data you produce. So we've actually had to buy our own servers, and we're in the process now of integrating that into the rest of the company as we get more high-content platforms in the research areas. We've certainly generated as much data — four people in one year — as an entire research department did with transcription profiling arrays over its lifetime. And we run thousands of chips a year. So the growth in data requirements is something we clearly had to wrap our heads around and take seriously.

As an add-on to that, what are the challenges of integrating HCS data with that from other discovery tools, like expression arrays, proteomics, etc.?

If you want to link high-content data with other data, such as transcription profiling or compound inhibitors, you need to be able to access the images, particularly across sites. So once again, if something comes up in a screen, you can actually export the results fairly easily. But the utility of high-content is actually examining the images and probing deeper into the data. Getting access, particularly across sites, is a big challenge.

What types of HCS technologies are you using?

In terms of an imager, we use the Cellomics system, and we do use their data archiving platform for our data storage. Beyond that, we've actually started mining the data outside the Cellomics bioapplication, so we're starting to export the data. The transcription profiling informatics group is actually looking at the data, so we're developing strategies for normalizing it, with an eye toward actually clustering it, much like a transcription profiling experiment. This is going to take a few years to get working really well. It's really easy to do, but none of these approaches is validated yet, so the validation stage is going to take some time.
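The normalize-then-cluster strategy he outlines can be sketched roughly as follows; the feature set and data are invented for illustration and do not reflect the Cellomics export format:

```python
# Sketch of the normalize-then-cluster idea: treat each well's vector of
# image-derived features like a transcription profile, z-score the features,
# then cluster treatments by phenotypic similarity. Data are simulated.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# rows = wells (siRNA treatments); columns = hypothetical HCS features,
# e.g. [cell_count, nuclear_area, caspase_intensity, target_stain]
features = rng.normal(size=(96, 4))

# Per-feature z-score normalization, analogous to array normalization
z = (features - features.mean(axis=0)) / features.std(axis=0)

# Hierarchical clustering of treatments by multi-parameter phenotype
tree = linkage(z, method="average", metric="euclidean")
labels = fcluster(tree, t=4, criterion="maxclust")
print(labels)  # wells sharing a label show similar phenotypic profiles
```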

Has your group been applying this mostly to cancer cell lines?

We certainly have done a lot with oncology. Much of oncology research is very pathway based, so it fits with RNAi very nicely. But high-content is certainly readily applicable to metabolic diseases, inflammation — a lot of the same pathways are seen in some of these systems, but it's just a question of what the assay itself is.

RNAi and HCS seem like such a natural fit together — is the overlap with RNAi and target validation the most promising application for HCS in drug discovery?

I think the most promising thing for HCS is that you're doing your validation and potentially your high-throughput screening on the same platform — the same cell-based assays. With better antibodies becoming available, in some cases you can actually measure phosphorylation of a specific target in a cell-based assay and derive the same quality of data you could with an in vitro assay. But because you're doing it in a cell, your validation and your high-throughput screen can really be the same assay. That holds tremendous potential for eliminating some of the disconnects that occur when taking a validated target and doing a high-throughput screen. A cell-based high-throughput screen really holds an advantage over an in vitro assay if it's done the same way as the validation experiments. It means that you're really affecting the target in the biological context in which you validated it in the first place.

What are some of the challenges that remain in combining these two technologies?

Well, going back to the temporal range of RNAi: it just makes it a process with a very wide dynamic range. In addition, with RNAi, one of the things we see is a much greater range of stochastic events during the RNAi response than with, for example, small molecules. Small molecules, in particular well-characterized ones, have a very narrow effect on a cell, but RNAi experiments tend to have a very wide range of knockdowns. That confounds a lot of experiments, because you wind up with very wide error bars and more mediocre results. High content, particularly when you analyze the data at the single-cell level, gives you the potential to remove some of the artifacts of the RNAi treatment and focus on the knockdown effect in a better, more concise way.
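The single-cell gating idea can be sketched as follows; the stain, gating threshold, and simulated numbers are all illustrative assumptions:

```python
# Sketch: knockdown is stochastic across cells, so whole-well averages mix
# silenced and unsilenced cells and widen error bars. Gating on a per-cell
# knockdown readout (here, a hypothetical residual target-protein stain)
# and scoring the phenotype only in well-silenced cells tightens the result.
import numpy as np

rng = np.random.default_rng(1)
n_cells = 5000
target_stain = rng.lognormal(mean=0.0, sigma=0.8, size=n_cells)  # per cell
# Simulated phenotype: shifted only in cells with strong knockdown
phenotype = 10.0 - 4.0 * (target_stain < 0.5) + rng.normal(0, 1, n_cells)

# Whole-well average dilutes the effect with unsilenced cells
print("whole-well mean:", phenotype.mean())

# Gate to cells with low residual target stain (strong knockdown)
silenced = target_stain < 0.5
print("gated mean:", phenotype[silenced].mean(),
      "in", silenced.sum(), "silenced cells")
```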

What types of improvements will make this more conducive to a higher-throughput drug discovery atmosphere?

In a high-throughput drug-discovery atmosphere, stability is really key. It really has to be a robust assay. High-content screening has made significant inroads here in the last couple of years. At the validation stage, it's really all about flexibility. You really do have to challenge your assumptions about what the biology is, rather than simply taking a literature target, and you need to develop an assay around that. The thing I like about high content is that you can pare it down from the complexity of the cell-based assay you used for validation and actually move toward a robust, high-throughput assay. That will be a critical link in eliminating a lot of compounds that would otherwise drop out later because they don't have any effect on the disease.
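One standard way screening groups quantify the robustness Haney mentions is the Z'-factor (Zhang et al., 1999), computed from positive and negative control wells; the sketch below uses simulated control data:

```python
# Sketch: the Z'-factor, a conventional measure of assay robustness in HTS.
# Values above roughly 0.5 are commonly considered screen-ready.
import numpy as np

def z_prime(pos: np.ndarray, neg: np.ndarray) -> float:
    """Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    return 1.0 - 3.0 * (pos.std() + neg.std()) / abs(pos.mean() - neg.mean())

rng = np.random.default_rng(2)
pos_controls = rng.normal(100.0, 6.0, size=32)  # e.g. a known-active siRNA
neg_controls = rng.normal(20.0, 5.0, size=32)   # e.g. a non-targeting control
print(f"Z' = {z_prime(pos_controls, neg_controls):.2f}")
```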