
Scripps' Harada on Using HCS to Conduct Large-Scale cDNA Screens in Cancer Research


At A Glance

Name: Josephine Harada

Position: Scientist, Cell-Based Screening Core, Scripps Research Institute, Jupiter, Florida

Background: Postdoc, Genomics Institute of the Novartis Research Foundation, San Diego, 2002-2004.


Josephine Harada is first author on a July 15 advance Genome Research paper on the use of high-content screening and image analysis to identify mammalian growth-regulatory factors. The paper was a collaboration between researchers at the Genomics Institute of the Novartis Research Foundation and Beckman Coulter's Biomedical Research Division in San Diego, and at the Scripps Research Institute and Vala Sciences in La Jolla, Calif.

Harada is now employed at the Scripps Research Institute's new branch in Jupiter, Fla., where she runs the Cell-Based Screening Core facility along with colleague Trey Sato. Harada and colleagues used a Beckman Coulter IC 100 imaging cytometer and its associated CytoShop software to conduct their analyses — platforms Harada became familiar with during her previous tenure at GNF. At that time, the IC 100 was called the EIDAQ 100 and was produced by biotech startup Q3DM, which Beckman later acquired. Harada and colleagues stuck with the platform as Beckman transformed it, and according to Harada, they have been pleased thus far with the new instrument, software, and customer support. Harada took a few moments last week to discuss the research with CBA News.

A lot of people are using high-content screening in these types of "genome-wide" screening studies lately — but many of them are siRNA knockdown studies. Your group instead used a cDNA overexpression approach. How do these two approaches complement or contrast with one another?

My experience has been that, when screening with cDNA collections versus siRNAs, it's generally been easier to pick out hits. Oftentimes when you disrupt the cell's equilibrium and overexpress a gene, you'll see 10-fold or 100-fold activation; whereas with siRNA knockdown, you might see more subtle effects because you're really dependent on the knockdown efficacy of that siRNA. Oftentimes you might end up with 50-, 60-, or even 90-percent knockdown — but there is definitely a range. And there are a lot of reasons you could see apparent knockdown of gene activity in a cell. The siRNA could be toxic, for example. There are a lot more counter-screens and secondary filters one has to consider when running a knockdown screen with siRNAs. Also, when you think about the absolute range of activation or repression — with overexpression and gene activation, the dynamic range runs from one to infinity, whereas with knockdown there is a floor, and you're going from one to zero. I might add here that it's not really an either/or — these approaches are clearly complementary. We routinely use siRNAs to validate hits from cDNA screens, and cDNA overexpression to determine whether the opposite phenotype is observed for an siRNA screen hit. But based on our experience running more than 100 genome-scale screens while I was at GNF, the cDNA screens have certainly been more productive.
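[To make that dynamic-range asymmetry concrete, here is a minimal Python sketch with purely hypothetical numbers: a 100-fold activation lands at roughly +6.6 on a log2 scale, while even a strong 90-percent knockdown bottoms out near -3.3.]

```python
# Minimal sketch of the dynamic-range asymmetry described above.
# Fold-activation and residual-activity values are hypothetical.
import math

overexpression_hits = [10, 50, 100]    # fold activation over baseline
knockdown_residuals = [0.5, 0.4, 0.1]  # activity left after 50/60/90 percent knockdown

for fold in overexpression_hits:
    print(f"{fold:>4}x activation -> log2 fold-change {math.log2(fold):+.1f}")
for residual in knockdown_residuals:
    pct = (1 - residual) * 100
    print(f"{pct:>3.0f}% knockdown  -> log2 fold-change {math.log2(residual):+.1f}")
```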

Couldn't the same thing be said for cDNA screening — where over-expression causes unexpected effects in other parts of the cell, so that the phenotype you're seeing might not necessarily be directly attributable to the gene over-expression?

Yes, absolutely. And both siRNA and cDNA screening are limited by transfection efficiency. In terms of the relative ease of getting nucleic acids into the particular cell type that is most physiologically relevant — there are certainly limitations with both approaches. But high-content screening enables you to focus in on the responder population. With the screen that we ran, for example, we saw a huge range in transfection efficiency, from less than one percent to 80 percent. We were still able to get useful information out of wells with only one-percent transfection efficiency because we could focus our analysis on just that transfected population.
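[The per-cell gating Harada describes can be sketched in a few lines of Python. The field names, values, and marker threshold below are hypothetical stand-ins for whatever per-cell features an imaging cytometer's analysis software exports; this is not the authors' actual pipeline.]

```python
# Hedged sketch: score only the transfected ("responder") subpopulation
# rather than whole-well signal. All numbers are illustrative.
from statistics import mean

cells = [
    {"marker_intensity": 950.0, "phenotype": 2.4},  # transfected
    {"marker_intensity": 12.0,  "phenotype": 0.9},  # untransfected
    {"marker_intensity": 780.0, "phenotype": 2.1},
    {"marker_intensity": 8.0,   "phenotype": 1.0},
]

MARKER_THRESHOLD = 100.0  # hypothetical gate separating transfected cells

responders = [c for c in cells if c["marker_intensity"] > MARKER_THRESHOLD]
background = [c for c in cells if c["marker_intensity"] <= MARKER_THRESHOLD]

# Even a well at one-percent transfection efficiency stays scorable,
# because the readout is computed over responders only.
print(f"responders: {len(responders)}/{len(cells)} cells")
print(f"responder phenotype:  {mean(c['phenotype'] for c in responders):.2f}")
print(f"background phenotype: {mean(c['phenotype'] for c in background):.2f}")
```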

Can you comment on how this translates to general drug discovery? This seems to be almost like a target ID method.

I would say that it all depends on how you define target validation. To me, target validation is efficacy in phase II clinical trials. But there is certainly utility in validating a target in a cell-based assay with siRNAs — it's more a matter of who's defining it.

But the cDNA approach can be considered target ID too, right?

It can, but with our screen, for example, we were looking more for cancer biomarkers. There were some hits from the screen where — when we systematically looked at their expression profiles across a total of about 90 different human and mouse tissues, as well as primary tumor samples and cell lines — the results were very, very striking.

Your paper mentions how most of these analyses have been single, end-point assays that measure only whole-well fluorescence or luminescence. The cell arrays being worked on in the Sabatini lab at MIT (see CBA News, 6/8/2004) are an example of an image-based approach, though…

Right, and there is a paper by Amy Kiger [J Biol. 2003; 2(4): 27] which was an image-based screen, but they analyzed the images by eye. They might have automated the image acquisition, but the analysis was done with human intervention. With the Sabatini approach, they looked at discrete areas on a slide, where nucleic acids were spotted, and that ended up being an area of about 100 cells. With a cell line that is easily transfected, I would say 100 events is a significant number, where you can pretty comfortably make a conclusion. But if your transfection efficiency is 10 percent or less, then you're making a conclusion based on a few cells.

So your group was doing these assays in wells containing numerous cells, and was able to pick out which cells were transfected for your readout?

Yes — we imaged, on average, a thousand cells per well because we didn't know what would constitute a significant number of events. In FACS, that number is 10,000. We arbitrarily set it at 1,000.
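[As a rough back-of-the-envelope on those event counts — the numbers below are illustrative, not from the paper — the standard error of a per-cell mean shrinks with the square root of the number of transfected cells scored. That is why roughly 100 spotted cells at 10 percent efficiency supports only a shaky conclusion, while 1,000 imaged cells per well usually does not.]

```python
# Illustrative counting statistics: how many transfected cells actually
# back a per-well conclusion at different transfection efficiencies.
import math

PER_CELL_SD = 1.0  # assume unit standard deviation for the per-cell readout

for cells_imaged in (100, 1000):
    for efficiency in (0.01, 0.10, 0.80):
        n = max(1, round(cells_imaged * efficiency))  # transfected cells scored
        sem = PER_CELL_SD / math.sqrt(n)              # standard error of their mean
        print(f"{cells_imaged:>5} cells at {efficiency:>4.0%} -> "
              f"{n:>4} scored, SEM ~ {sem:.2f} x SD")
```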

Can you describe the high-throughput transfection process referred to in the paper?

It's a fairly widely used method termed reverse transfection, or retro-transfection. Rather than seeding the cells first and then introducing the DNAs with the lipid reagent on top, we pre-seed the DNAs into 384-well plates and then introduce the cells on top. It seems to work with nearly the same efficiency as the more conventional forward transfection method.
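[As a schematic of the order of operations only — the well layout, clone IDs, and step names below are placeholders, not the authors' automation protocol — reverse transfection amounts to pre-spotting one cDNA per well and then applying the bulk steps plate-wide:]

```python
# Schematic worklist for a reverse-transfection plate: DNA is pre-spotted
# per well, then lipid and cells are dispensed on top. All identifiers
# and steps are illustrative placeholders.
ROWS = "ABCDEFGHIJKLMNOP"  # 16 rows x 24 columns = 384 wells
COLS = range(1, 25)

cdna_library = [f"cDNA_{i:04d}" for i in range(384)]  # hypothetical clone IDs

worklist = []
for i, (row, col) in enumerate((r, c) for r in ROWS for c in COLS):
    well = f"{row}{col:02d}"
    worklist.append((well, cdna_library[i], "spot DNA"))

# After spotting, bulk steps apply to every well in order:
bulk_steps = ["dispense lipid reagent", "incubate to form complexes",
              "seed cells on top", "incubate and image"]

print(worklist[0], "...", worklist[-1])
print(" -> ".join(bulk_steps))
```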

So this is actually similar to the Sabatini approach, except they did this on spots on slides, and you did it multiple times in wells of a well plate?

Exactly.

Moving forward, are you continuing this research, or was this more of an isolated project?

I am continuing the work, but now at Scripps Florida, where I maintain the cell-based screening core. The idea is to offer these genome-scale collections of cDNAs and siRNAs to investigators in the institute, and to maintain the automation that is required for these types of screens. We're also running a lot of external collaborations with scientists in the state of Florida, as well as other places, like the University of Wisconsin at Madison, and Scripps-La Jolla. My background is in using this platform, and we're going to continue to develop it.

Will there be any formal industry collaboration moving forward? I know that last time we talked, you made a point that Beckman [Coulter] has come in and helped develop new algorithms for your studies.

Yes, they've been great in helping us get set up, but we do not have a formal collaboration with them, or with Vala. But I would say that Beckman is ahead of the curve in working with a number of academic investigators in the development of new algorithms.

In the paper you also mentioned that future studies will likely require the generation of additional image-analysis algorithms. Did you have anything particular in mind?

We're interested in getting scientists from other sectors involved. Regarding the image-analysis problem — we mentioned in the press release [about this paper] that it's an adaptation of facial-recognition technology. The same problem is also seen in analyzing satellite images. I think that we need to get scientists involved from other industries where there have been similar needs, and where they have been looking at them for much longer than we have. So, for example, getting the folks at NASA involved might be beneficial. The way that I see the field going now is that there was a large period of technology development, and the focus now seems to be on making it more accessible to the average consumer or average investigator. It's nice to see companies like Vala who are still pushing the envelope of image analysis, and are focused exclusively on that space. Whenever you try to mass-produce something, it often comes at the cost of new technology development.
