Michael White (right), professor of cell biology, and Angelique Whitehurst, a postdoctoral researcher at the University of Texas Southwestern Medical Center
Researchers from the University of Texas Southwestern Medical Center last week published the results of a high-throughput RNAi screening experiment in which they identified 87 genes that appear to sensitize human cancer cells to chemotherapy.
Using siRNAs from Thermo Fisher Scientific subsidiary Dharmacon, the researchers conducted a paclitaxel-dependent synthetic lethal screen in the NCI-H1155 human non-small-cell lung cancer line in order to “identify gene targets that specifically reduce cell viability in the presence of otherwise sublethal concentrations of paclitaxel,” they wrote in their paper, which appeared in the April 12 issue of Nature.
“Several of these targets sensitize lung cancer cells to paclitaxel concentrations 1,000-fold lower than otherwise required for a significant response,” the authors wrote.
The assay used a library of 84,508 siRNAs targeting 21,127 human genes, arrayed in a one-gene, one-well format in 96-well plates. Cell viability was assessed by measuring cellular ATP concentration.
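For context, the scale of such a screen can be worked out from those figures alone. The sketch below is our own back-of-the-envelope arithmetic, assuming the library's roughly four siRNAs per gene were pooled into a single well per gene, as the one-gene, one-well description implies:

```python
# Back-of-the-envelope scale of the screen. The library figures come from
# the article; the pooling assumption (all siRNAs for a gene in one well)
# is our reading of the one-gene, one-well format.
N_SIRNAS = 84_508       # siRNAs in the Dharmacon library
N_GENES = 21_127        # human genes targeted
WELLS_PER_PLATE = 96

sirnas_per_gene = N_SIRNAS / N_GENES                # ~4.0 siRNAs per gene
plates_per_pass = -(-N_GENES // WELLS_PER_PLATE)    # ceiling division: 221

print(f"~{sirnas_per_gene:.1f} siRNAs per gene")
print(f"~{plates_per_pass} 96-well plates per pass, before replicates and controls")
```

With the triplicates under two conditions that White describes later in the interview, that plate count would be multiplied roughly six-fold.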
This week, Cell-Based Assay News spoke with Michael White, the senior author on the paper, to discuss the advantages and the challenges of RNAi screening for this particular application.
Previous chemotherapy response studies have been performed with microarrays and other genomic platforms, so why did you go with an RNAi cell-based screen for this study?
One very straightforward issue is that many of the studies you describe that employ microarrays or genomic analysis are very good at identifying correlative relationships between the biology or the disease and which genes are up and which genes are down. But those correlations mix causal relationships with bystander effects, so it's often very difficult to identify the components that are actually responsible for the phenotype you're looking at when you're doing those sorts of correlative analyses.
So you might identify a signature of, say, 57 genes that correlate with paclitaxel resistance, and that might be extremely valuable as a prognostic or a diagnostic tool, but it doesn’t necessarily tell you anything about the biology behind chemoresistance because those genes could be changing for a whole variety of reasons.
That was why we were really excited about being able to employ true somatic cell genetics, because these screens are designed to reveal causal relationships rather than correlative ones. The nature of the screen is such that you are identifying the gene products that are directly contributing to the phenotype of interest.
If the synthetic lethal screen works on a one-by-one basis, in terms of the genes that are involved, how did you arrive at this specific set of 87 genes instead of, say, 86? Do these genes work in combination or can any one of these genes lead to the same result?
What we handed to the research community is a list of 87 genes, each one of which independently has a causal relationship with chemosensitivity in the genetic backgrounds that we tested. So the issue is not that they are necessarily functioning combinatorially, because, as you said, the way that we carried out the screen was to test every gene in the genome one by one for its capacity to sensitize the lung cancer cells to otherwise innocuous doses of paclitaxel. So that was the basis of the synthetic lethal screen.
So inactivating the expression of any one of those genes by RNAi isn't particularly lethal on its own, but it can reveal profound sensitivity to paclitaxel.
And as for why it was 87 instead of 86, on the one hand it's relatively arbitrary, but on the other hand, it's a result of our efforts to use a blinded statistical algorithm to pick the hits. We went through over 21,000 different tests, in triplicate under two conditions, and we wanted to take advantage of the triplicate analysis to employ both the magnitude of the response and its reproducibility to create a statistical bottleneck that the data would have to pass through in order to be scored as positive. So that list is basically all the genes that passed that test. These are the high-confidence genes that make a contribution to the modulation of chemosensitivity.
Very likely, there are more than that, but we were more concerned about trying to stay away from false positives than trying to avoid false negatives.
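The interview does not spell out the scoring algorithm, but the two-axis filter White describes, magnitude of response combined with reproducibility across triplicates, can be sketched roughly as follows. This is a minimal illustration with simulated data and arbitrary thresholds, not the authors' actual method:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated triplicate log2(paclitaxel/vehicle) viability ratios, one row
# per gene. Real screen data would replace this array.
N_GENES = 21_127
log_ratios = rng.normal(0.0, 0.3, size=(N_GENES, 3))

# Axis 1: magnitude of response (mean effect across replicates).
mean_effect = log_ratios.mean(axis=1)

# Axis 2: reproducibility (one-sample t-test of the triplicates against 0).
t_stat, p_val = stats.ttest_1samp(log_ratios, popmean=0.0, axis=1)

# The "statistical bottleneck": a gene scores as a hit only if its effect
# is both large (viability drops with drug) and reproducible. The cutoffs
# here are placeholders, not the paper's thresholds.
hits = (mean_effect < -1.0) & (p_val < 0.01)
print(f"{hits.sum()} genes pass the bottleneck")
```

Requiring both criteria at once is what biases the resulting list toward few false positives at the cost of more false negatives, as White notes above.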
Were there any particular challenges with this approach that you needed to overcome? For example, off-target effects are a common problem cited with RNAi screens.
We went into the study with a relatively strong understanding of the difficulties that can be associated with, a) high-throughput analysis, and b) RNAi-mediated loss-of-function studies. We've been trying to employ RNAi for a long time to derive biologically meaningful relationships.
So there were two things that we were very concerned about up front during assay development. The first was the best way to set up a high-throughput screen to avoid the noise that's associated with this sort of analysis: plate effects, well effects, things that can give you a 1-to-12 periodicity from position within a row of the plate, or a 1-to-96 periodicity from the position of the well within each plate in the stack.
We decided that we needed to do a two-condition screen to avoid that, so that we could take a ratio between the two conditions with every gene in an architecturally identical position. A lot of these plate effects get washed out that way. That turned out to be very effective in eliminating a problem people often have with high-throughput screens, which is regression toward the mean of their hits upon retesting. That didn't really happen to us, and I think it's because we got rid of a lot of noise in the system, which enhanced the signal.
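As a concrete illustration of that design, the ratio step might look like the following sketch, where the two matrices hold per-well viability readings from the drug and vehicle arms of the screen. The array shapes and names are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical raw viability (ATP signal), shape (plates, wells). The same
# siRNA sits in the same well position in both conditions.
raw_drug = rng.uniform(500, 1500, size=(221, 96))
raw_vehicle = rng.uniform(500, 1500, size=(221, 96))

# A shared positional artifact (e.g., edge wells drying out) multiplies
# both conditions identically...
well_bias = np.linspace(0.8, 1.2, 96)
raw_drug *= well_bias
raw_vehicle *= well_bias

# ...so it cancels in the per-well ratio, which is the screen's readout.
sensitization = raw_drug / raw_vehicle
```

Because each gene occupies the same position in both arms, row, column, and stack periodicities divide out rather than having to be modeled explicitly.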
The other thing we were concerned about was off-target effects, and also the issue of RNAi being what people call hypomorphic analysis: you're dealing with phenotypes in cells that result from reducing the concentration of a protein, not eliminating it. For every gene in the genome there are going to be optimal conditions for using RNAi to most effectively deplete the particular protein, but we can't generate optimal conditions for all 21,000 or so genes in the genome; we had to pick one single condition. So we spent quite a bit of time trying to identify the happy medium that would let us collect the most observations we could, realizing that there were going to be a lot of false negatives because of our inability to deplete particular proteins to levels that would produce a phenotype.
And then, lastly, with respect to the off-target issue, what we decided to do was let the data itself give us the motivation to follow up hits. The way we did that was by taking the understanding that if a biological system or machine is driving a process, that system or machine is composed of proteins expressed from multiple different genes that need to collaborate with each other to perform that function.
So if you're doing an effective screen, the expectation is that if that system or machine is important, you should hit multiple genes involved in it, and that's in fact what we saw. We hit multiple genes encoding components of the proteasome, and multiple genes encoding components of the gamma-tubulin ring complex. Those sorts of observations allowed us to very effectively enrich for on-target consequences, because pulling out multiple genes involved in the same process makes it highly improbable that you're looking at an off-target effect.
So, for example, the probability that we would have enriched for the number of proteasome components that we did by chance is less than one in ten billion by hypergeometric distribution analysis. That was one method that helped us reduce off-target effects, but of course, we also had to go in and make sure, for selected components, that we could reproduce the relevant phenotypes using independent reagents with independent sequences, to confirm that we were in fact analyzing the consequences of inhibiting the gene we expected to be inhibiting.
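The enrichment calculation White refers to is a standard hypergeometric tail test. The sketch below shows the form of that test; the interview quotes only the resulting p-value, so the proteasome gene counts used here are invented for illustration:

```python
from scipy.stats import hypergeom

# Illustrative counts only; the interview reports just the p-value (< 1e-10).
M = 21_127  # genes screened (population size)
n = 40      # proteasome-component genes in the library (assumed)
N = 87      # hits returned by the screen
k = 10      # proteasome-component genes among the hits (assumed)

# P(X >= k): probability of drawing at least k proteasome genes among
# N random hits, under the null hypothesis of no enrichment.
p_enrich = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment p-value ~ {p_enrich:.1e}")
```

The smaller this tail probability, the less plausible it is that the cluster of same-complex hits arose from independent off-target artifacts.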
What are your next steps in this particular study, and then, more broadly, are you looking to apply this method to other drug screening problems?
That's the big question. I think our next step for this particular study, beyond pursuing many, many interesting mechanistic relationships, is to find out whether the observations we made in tissue culture cells also represent important relationships in the tumor. Everything we did, because of the technology we were using, had to be done in homogeneous cell cultures, and those obviously don't always recapitulate what happens in a patient. So what we want to do now is move into genetic model systems, tumor xenograft models, in order to validate some of these genes for their capacity to sensitize tumors to chemotherapy in a more physiologically relevant context.
That would be with mouse models?
That’s with mouse models using human orthotopic xenograft models. And that’s currently ongoing.
And then, more generally, we are already iterating this process with different sorts of synthetic lethal screens to ask different kinds of questions in different tumors. I think one of the things we were really excited about with this study is that it looks like an effective and practical mechanism for quickly and cheaply collecting the core components that support aberrant regulatory processes in cancer cells. So I think many groups are probably going to employ this approach to generate the same sort of observations we made in this particular system.
Any advice for other groups that might be trying this?
My advice would be to make sure they spend an incredible amount of time on assay development so that they know that they have a very robust and reproducible platform from which to perform the experiments, because that really, I think, defines whether you’re going to identify meaningful relationships at the end of the screen.