At A Glance
Name: Anne Carpenter
Position: Postdoc, Whitehead Institute for Biomedical Research, MIT
Background: PhD, Cell Biology, University of Illinois, Urbana-Champaign — 2003; BS, Biology, Purdue University
Anne Carpenter is an up-and-coming cell biology researcher in David Sabatini’s laboratory at MIT’s Whitehead Institute for Biomedical Research, and is a member of MIT’s campus-wide Computational and Systems Biology Initiative. She is not only working with RNA interference, transfected cell microarrays, and high-throughput microscopy, but is also designing new automated imaging software. Last week, Carpenter took a few moments to discuss her work and ideas about systems biology with Inside Bioassays.
Regarding your work in automated imaging and cellular arrays: How did you become interested in these areas?
The automated imaging started in graduate school, when the work I was doing required basically staring at samples for hours under the microscope, trying to decide if there was a difference between two samples — the control and the experimental sample. It was so frustrating to look at the samples for such a long time, not really be able to come to a firm conclusion, and have the result be so subjective. I wanted to be more quantitative, and I wanted more objective, unbiased results. That led me to start collecting images — still by hand — and to develop some very rudimentary software to measure the things I was looking at, which at the time was chromatin structure.

Once I saw how much the automated image analysis improved the results — suddenly we had objective, quantitative results coming from images — I became really excited about collecting images faster. So that led us to automate a microscope we had in the lab. It already had a motorized stage, so we just programmed it to collect images automatically. Once that was set up, we were suddenly able to [do certain things]. For my first project in grad school, I spent two months, at least four hours a day, collecting images by hand, and it was incredibly tedious. Once I got the automated microscope set up, in a week I collected more data than I had in my entire graduate career up to that point, and it could be analyzed automatically as well. Just that incredible increase in throughput made me see the power of this technology and become interested in it as a career.
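The objective, quantitative measurement Carpenter describes can be illustrated with a toy sketch — this is not her actual software, just a minimal example of replacing a subjective by-eye comparison with a reproducible number (here, mean intensity of above-threshold pixels in synthetic control and treated images):

```python
import numpy as np

def mean_foreground_intensity(image, threshold=0.5):
    """Return the mean intensity of pixels above a fixed threshold.

    A stand-in for the kind of simple, unbiased per-image measurement
    described in the interview; real chromatin analysis is far more involved.
    """
    foreground = image > threshold
    if not foreground.any():
        return 0.0
    return float(image[foreground].mean())

# Synthetic images: identical background noise, brighter signal in "treated".
rng = np.random.default_rng(0)
control = rng.uniform(0.0, 0.4, size=(64, 64))
treated = control.copy()
treated[20:40, 20:40] += 0.5  # hypothetical bright region, e.g. condensed chromatin

print(mean_foreground_intensity(control))  # background never crosses threshold -> 0.0
print(mean_foreground_intensity(treated))  # bright region yields a nonzero score
```

The point is that the same measurement runs identically on every image, so two samples can be compared quantitatively rather than by eye.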
[Regarding] the cell arrays, I had read the original paper published by the Sabatini lab, which was a Nature paper (Nature, 2001 May 3; 411(6833): 107-10), and I just thought it was terribly clever. I didn’t think about whether it was going to be relevant for me in particular, but I thought it was a really clever idea and filed it away early in grad school as something I might be interested in later, so when it came time to look for postdocs, that was one of the labs I considered joining.
Are these two techniques different means to the same end?
They’re actually extraordinarily complementary technologies. Cell arrays are useful for looking at thousands of samples on a single microscope slide. So you can look at those samples just by eye, if you have a really obvious phenotype — say you’re looking for increased staining of a certain protein — you can look at the arrays at a very low magnification and see bright spots where you have a hit from your screen. But with automated imaging, if you go to higher resolution and record images and analyze them, as well, you have much more ability to look at interesting phenotypes — either subtle phenotypes that you wouldn’t be able to see just looking at the array by eye, or phenotypes that are just not feasible to see from the bird’s-eye view. So it’s looking at localization of proteins, looking at changes in protein levels, and all kinds of similar things.
Would you characterize the microscopy as being more high-content and the arrays as being higher throughput?
That’s not wrong, but it’s not how I would put it. Automated imaging does allow you to do high-content assays, and the cell arrays you can look at in either high-content or high-throughput mode, depending on how you want to look at them.
Is one more suited to a particular application or type of screen than another?
No, I think you really get the most power if you use them both. Automated imaging just makes cell arrays that much more applicable to a lot of different phenotypes; without it, looking at high-resolution images by eye would be so tedious that the work wouldn’t be feasible.
David Sabatini, who is well-known in the RNAi field, heads your lab. How does either of these applications fit in with RNAi?
We’re so fortunate that RNAi has become really reliable and feasible over these past few years, because it’s a perfect complement, again, to these cell arrays. Previously, in the initial publication, the cell arrays allowed us to look at overexpression of genes in different spots on a cell microarray. But with RNAi we can now do loss-of-function experiments, and so it brings the power of a traditional genetic approach to mammalian cells and to Drosophila cells in culture, and allows you to essentially do genetics in cell culture.
All of this work falls under the umbrella of systems biology, at least according to the lab’s website. That’s a term that is getting thrown around a lot lately, but different people seem to have different definitions. How do you characterize it, and how do you think the concept will affect biological research in the future?
Well, the traditional biology approach has been to choose a gene of interest, and to perform a variety of assays on it. You might look to see: Does my gene of interest produce a protein that binds to this other protein? Does it localize in a certain place? Does it perform a certain enzymatic function? And so you choose a gene and perform a variety of assays.
What these new high-throughput approaches allow us to do — and this falls under the umbrella of systems biology — is, instead of choosing one gene and performing all these assays, to study the entire genome — every single gene — and perform the same kinds of assays. So this approach allows you not only to confirm whether your gene of interest is involved in some particular function, but to go ahead and screen the entire genome while you’re at it. The conceptual approach is not really different from traditional biology; it’s just a matter of being able to answer a particular question about all the genes in the genome instead of just one. That information allows for unbiased screening of genes, so we are uncovering things that we would not have figured out just tracking down genes and performing assays one at a time.
As far as impact on the future of biology, I think that a lot of academic labs will be transitioning to doing these high-throughput screens [for] anything that can be converted to high-throughput format: If an experiment is worth doing once, it’s worth doing 6,000 times for yeast, or 14,000 times for Drosophila. So I think we’ll be seeing this transition occur more and more in academic labs, and as such, we’ll probably start getting surprises as approaches become less biased towards candidate genes.
Did you develop this new automated microscopy platform or software for it? Are there any plans for commercialization of any of these technologies?
Automated microscopes are very readily available these days, so from pretty much any microscope company you can buy an automated microscope off the shelf, and it will come with some sort of software to control the hardware, at least, and maybe do some rudimentary image analysis as well. There are also commercial companies that have produced systems geared more towards pharmaceutical companies — more of an all-in-one box format, already set up for high-throughput experiments. So that covers the automated microscopes. The one that we bought in graduate school just required programming the hardware to collect images the way we wanted, so it was published in a paper, but it was not necessarily commercially viable.
The cell arrays were, as I said, originally published in that Nature paper, and are being used by academics in many different laboratories, and if companies are using them, they license the technology through the Whitehead Institute. That was developed in the Sabatini laboratory before I arrived.
And the software for automated image analysis has really been the bottleneck so far, especially from the academic perspective. Software was developed primarily for pharmaceutical applications, with very simple readouts — for example, cell-lethality assays, where you just count cells, or assays looking for changes in localization between the nucleus and the cytoplasm. There hasn’t been software that’s flexible and usable enough for academics especially, but even for pharmaceutical companies wanting to do something more interesting or complicated. We saw that need when I joined the lab a year ago, and my project has been writing software to fill it. It’s called CellProfiler, and when it’s published, we will make it available to academics for free, and will charge commercial users a nominal fee, just to [provide] the technical support for the software.
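The "counting cells" readout Carpenter mentions as a simple output can be sketched in a few lines — a toy version only, using thresholding and connected-component labeling, whereas her actual software handles far more complex phenotypes:

```python
import numpy as np
from scipy import ndimage

def count_objects(image, threshold):
    """Count connected bright regions (e.g. stained nuclei) in an image.

    A toy version of the simple cell-counting readout described above;
    real pipelines add illumination correction, adaptive thresholding,
    splitting of touching cells, and per-object measurements.
    """
    mask = image > threshold
    _, n_objects = ndimage.label(mask)  # connected-component labeling
    return n_objects

# A tiny synthetic field of view with three bright "nuclei" on a dark background.
field = np.zeros((50, 50))
for y, x in [(10, 10), (25, 35), (40, 15)]:
    field[y - 2:y + 3, x - 2:x + 3] = 1.0

print(count_objects(field, threshold=0.5))  # -> 3
```

Applied per spot across an arrayed slide, even a readout this simple turns thousands of images into a screenable table of numbers.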
Would you care to comment on any of the specific microscopy or imaging platforms that you use in the laboratory? Do you favor any specific vendors?
Not really. The only things I could comment on would not be positive [laughs], so I think we’d better just let that lie.