Garry Nolan Discusses High-Throughput Flow Cytometric Analysis


At A Glance

Name: Garry Nolan

Position: Associate Professor, Genetics, Stanford University

Background: Postdoctoral fellowship, Stanford/MIT/Rockefeller University — 1989-1993; PhD, Stanford University — 1983-1989

According to your laboratory homepage, your team primarily uses flow cytometry and fluorescence-activated cell sorting to investigate immune cell signaling in HIV infection. Is that correct?

It’s not just HIV infection, actually. It’s any disease incidence, mostly in the immune system, because the approach is amenable to that. We have a background in HIV because of some earlier work I did in the 1990s, but I would say that primarily, the flow cytometry work for phosphoprofiling is focused on cancer and autoimmune disease. We actually do have some recent applications for viral infections such as HIV, but that’s really more in the early stages. So although we have lots of work in the past with HIV, the phosphoprofiling is a new technology we developed in the last couple of years that’s getting a lot of play and interest, but its application to HIV was just formulated.

So tell me a little bit about the phosphoprofiling technique.

Really what it’s about is that if you think about how flow cytometry has been used in the past two or three decades, it primarily has been a tool of immunologists, who use it to define subpopulations of cells in the immune system, based primarily on surface markers. You’ve got so many T-cells, so many different types of T-cells, you’ve got so many B-cells, et cetera, and it’s been used, for instance, with HIV patients, to say ‘Well, if your CD4-to-CD8 ratio goes down, then you’re progressing in the disease.’ More recently it’s been used in autoimmune disease to say ‘If you’ve got the following kinds of T-cells that can secrete the following types of cytokines, then that says you’re a TH1 or TH2 type of disease, or you’ve got this many kinds of T-helper cells of one kind or another.’ And the ratio of those two kinds of T-helper cell types will determine, say, whether you’re likely to become autoimmune, or whether you’ll have an inflammatory reaction, or whether you can suppress an autoimmune or inflammatory reaction.

And also, flow cytometry has been used to look at, for instance, dendritic cells, and their activation states. Dendritic cells are, of course, the primary antigen-presenting cells. So we’ve been looking at all these things, all these populations, but it’s really been almost a form of phenomenology: Inferring from the relative numbers of cells what the immune system status should be — based on a lot of correlations to what we observe is actually happening to an animal or a human with that kind of profile of immune system cells. Yet we’ve not really been able to ask the questions about what’s going on in the biochemistry inside of those cells.

So on a parallel track over the last 20 or 30 years has been the other great enterprise of biomedicine, and that’s been molecular biology and biochemistry. And that’s worked out for us the pathways leading from, say, cell surface receptors that are found on all kinds of cells as they sense the environment, through internal proteins connected to those cell surface receptors, and those internal proteins being enzymes, like kinases, that will transmit signals into the cells, into the network, where the network then makes a decision and either tells the cell to do something else, like release cytokines, or might send a signal into the nucleus to order patterns of gene transcription to change that will then cause the cell to go off and do something else — differentiate, die and apoptose, et cetera.

So, the problem, of course, with the biochemistry of the last 20 or 30 years is that it’s all been done with lysis buffers. It’s been after we break open the cell because the technology has not been resolved enough to look at, in a refined manner, what’s going on in single cells. What happens when you grind up a population of cells is that you lose all that cell-by-cell information, or it requires you to pre-purify the cells in some manner, and then do the biochemistry. Well, the world’s best way of purifying cells and knowing exactly what it is that you’ve got — especially if you’re trying to purify cells on the basis of multiple surface markers, which is how we define cells, mostly — is flow cytometry. So can we come up with ways of skipping having to do that pre-purification process and do the biochemistry at the same time as we’re measuring the cell surface? And so that was really the genesis of the approach developed in the lab by Omar Perez, who started the work as a first-year student in my lab — he got his PhD in two years, so he did extraordinarily well and has won all kinds of awards for his work.

So basically, it was coming up with a series of techniques. And believe me, it really isn’t easy to simultaneously stain cells on the surface and get antibodies into the cells, of course after fixing and permeabilizing them so that they still retain the architecture and are still recognized as a cell. But you can stain with antibodies with various fluorophores on the cell surface to go with the classical way of delineating and defining the cell surface, and then reach into the cell with other antibody sets and other fluorophores that are looking at phosphorylation states — the proteins that we think are relevant for signaling.

So what you do then is you use antibodies with a more advanced staining technique, because, depending on the cell type and the targets you’re attempting to get at, you have to vary the technique slightly or considerably. And then you go to the flow cytometer, which then reads the cell parameters as read out by the fluorophore at the rate of 50,000 or 100,000 cells per second, but you’re getting 10 to 15 parameters per cell. And then you use the data analysis software on the far end after you’ve collected all the data to now look in this 10- to 15-dimensional data for relevant sub-populations. So what we can do now is the type of biochemistry that people were limited to in cell lines — but now we can do it in primary cells, now we can do it in patient samples, now we can do it in mouse primary cells and ask all the kinds of cool stuff that we’ve been asking about cell lines. We know that cell lines are decently reflective of what’s going on in the primary cells, but now we can nail down what’s going on in these, and because we can do it with all these parameters simultaneously, we can get the gestalt, you know, the all-at-once measurement of all these different pathways, and when you see these correlations happening right in front of your eyes, the conclusions you can make are rich, at the least.
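The downstream analysis he describes — collecting many parameters per event, then hunting for subpopulations in software — can be sketched in a few lines. This is a hypothetical toy in Python: the marker names, thresholds, and distributions are all invented, and real phospho-flow analysis uses dedicated flow cytometry software rather than raw NumPy.

```python
import numpy as np

# Toy sketch: simulate a 3-parameter per-cell readout, then "gate" a
# subpopulation in software and read a phospho signal inside the gate.
# All markers, thresholds, and distributions here are illustrative.
rng = np.random.default_rng(0)

n_cells = 10_000
# Two simulated surface stains: one bright/dim split per population
cd4 = np.concatenate([rng.normal(8, 1, 6000), rng.normal(2, 1, 4000)])
cd8 = np.concatenate([rng.normal(2, 1, 6000), rng.normal(8, 1, 4000)])
# In this toy data, CD4-high cells carry a stronger phospho signal
phospho = np.where(cd4 > 5, rng.normal(6, 1, n_cells), rng.normal(3, 1, n_cells))

events = np.column_stack([cd4, cd8, phospho])

# Software gate: CD4-high AND CD8-low events
gate = (events[:, 0] > 5) & (events[:, 1] < 5)
cd4_pos = events[gate]

print(f"events in gate: {gate.sum()}")
print(f"median phospho inside gate:  {np.median(cd4_pos[:, 2]):.2f}")
print(f"median phospho outside gate: {np.median(events[~gate, 2]):.2f}")
```

The point of the sketch is that once each cell is a row of parameters, a "subpopulation" is just a boolean mask, and any intracellular measurement can be summarized per gate rather than per lysed bulk sample.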

What is the difference between flow cytometry and fluorescence-activated cell sorting (FACS)?

FACS was a trademark terminology developed by Len Herzenberg — I was a grad student with him — which they had as they were developing the machine in association with Becton Dickinson. Flow cytometry is a more generic term that covers a lot of different variations on the theme. If you think FACS, you generally think of the BD flow cytometer. And so, for instance, if you write a paper and refer to FACS when you really mean flow cytometry, some people get a little bit irked because there were multiple developers of the technology around the US. If you think about who has used it the most, certainly Len Herzenberg is near the top of that list, if not at the top. So, it’s kind of like a Hoover and a vacuum cleaner.

Have you compared or thought about comparing flow cytometry with some newer methods of high-throughput cellular analysis, such as automated microscopy?

We have, and my limitation is this: I don’t think that there’s anything wrong with the static imaging approaches, as they’re sometimes called, except that a lot of work has to go into defining what the cell is you’re working with. The amount of data storage and computation that they have to deal with is not insignificant. They do get a level of information that we can’t acquire, such as sub-cellular information, or they’re more able to work with cells that are fixed to a plate, let’s say. But I think that what we’ve been doing is directly applicable to what’s been going on there — I mean the staining procedures, essentially the histology that we’re doing has to be worked out in exactly the same way. In fact, if anything it’s going to be a little more difficult, because you can’t wash cells as easily on the plates, and you’re always worried about what you lose from the plate. So from what we’ve seen — and I’m very mercenary — if I thought there was something better than flow cytometry, we would do it. It’s also something I was talking to some industrialists about. What they’re looking at and what they say is ‘Yeah, there are 20 different platforms out there for doing microscopy, but there’s a huge installed user base for flow cytometry, and it’s a whole industry unto itself.’ There are all kinds of people who are already skilled or experienced for 10 or 15 years in the use of these sorts of things. And flow cytometry already is in clinical trial work, unlike chips, for instance. Chips are a great technology, but you don’t see them in any of the FDA-approved clinical trials. We’ve been using flow cytometry to validate whether a person has HIV, or at least the [disease’s] stage for 10 or 15 years. 
And again, I’m not trying to be disrespectful to the other technologies at all — they certainly have fantastic uses, and I’m probably going to be trying out a number of the platforms to see which one gives us the most information and doesn’t force me to store gigabytes of data or reduce those pictures that they take into format files that are similar in context or complexity to what we get with flow cytometry, but without besieging me with terabytes of data that I’ve got to worry about how to store.

That being said, do you see ways that flow cytometry needs to be improved?

Yes — the most obvious improvements are probably on the analytical side. Automated handling of multiple samples is an area that needs to be dealt with. Another area that needs to be dealt with, frankly, is robotic setup of the complex staining. It’s no good if I have a high-throughput robot to load the samples if I can’t produce enough samples for the robot to use.

Coupled to that, these stains are very complex, so programs up front that help you with the staining process are needed. Just because I have markers one through ten, four of which are on the cell surface and six of which are intracellular — I know what they are — doesn’t mean that, for instance, I have all the antibodies with the right fluorophores that will all work in combination. And so if I’ve got a lot of different considerations about how to do that staining, I either need one of the five experts on the planet who know how to do this quickly, or I embody their knowledge in a series of programs. The Herzenbergs are actually working on this quite diligently — and we’re helping them, especially from the intracellular side — to give the world a leg up so that people don’t have to be soaked in the deep lore of what’s required to do these stainings. I think the machines are already fantastic, especially BD. I am associated with them monetarily, so I do have a conflict of interest, but I do think they have one of the best series of machines on the planet. I mean there’s at least one other vendor that I respect greatly, and that’s Cytomation, and I also respect some of the Beckman-Coulter machines, but especially some of the BD machines are allowing for high-color analysis in ways that others really aren’t getting to yet.
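The panel-design bookkeeping described here — matching each marker to an available antibody-fluorophore conjugate without reusing a fluorophore across the panel — is, at its simplest, a small constraint-satisfaction search. Below is a minimal sketch with an entirely invented availability table; a real tool would also have to model spectral overlap, brightness, and fixation/permeabilization sensitivity.

```python
from itertools import product

# Invented availability table: marker -> fluorophore conjugates on hand.
# Both the markers and the fluorophore inventory are hypothetical.
available = {
    "CD4":     ["FITC", "PE"],
    "CD8":     ["PE", "APC"],
    "p-Stat5": ["Alexa647", "PE"],
    "p-Erk":   ["FITC", "Alexa647"],
}

def design_panel(available):
    """Return one marker->fluorophore assignment with no fluorophore
    reused, or None if the available conjugates cannot cover the panel."""
    markers = list(available)
    for combo in product(*(available[m] for m in markers)):
        if len(set(combo)) == len(combo):  # no fluorophore used twice
            return dict(zip(markers, combo))
    return None

panel = design_panel(available)
print(panel)
```

Brute-force enumeration is fine at this scale; with the ten-plus markers and larger conjugate inventories he describes, the same problem would call for a proper constraint solver, which is exactly why encoding the experts' rules in software pays off.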

So on the far side, after you’ve done the analysis, the issue that’s really coming to a head is data storage — indexing of what it is that you’ve done. You know, knowing that it’s not sitting on 50 different CDs around the laboratory, but knowing that it’s actually cataloged somewhere on a central server — I think that’s important.

And finally, how do you deal with multidimensional data? I mean, if you’re not Stephen Hawking … how do you think in 11 dimensions? How do you present data in 11 dimensions? How do you find clusters of information that are existing and changing in 11 dimensions, how do you assign numerical value to those clusters of information? We go up against the issue of: We have this kinase and phosphoprotein that’s changed in its activity or status; well, has it changed in a quantitative manner? Has it changed in a quantal or qualitative manner? When you have multiple sub-populations in these various end states, if you want to call that a profile to say that you’ve profiled a certain disease or state, that it’s reflective of the pathogenesis of the disease, what signature value do you use? How do you extract that information from what are essentially multi-dimensional blobs? It’s a very difficult mathematical problem. I sit down in front of mathematicians and statisticians here at Stanford, and luckily they get fascinated by the problem. Often they have developed these statistical or mathematical regimes that are playthings to them. They’re just sets of rules that fall out of assumptions about a series of problems, but usually their problems have no connection to real time, real space, and real life. And when you sit down with them in front of a problem, they go ‘Oh my God, this is just an example of this!’ And then they get really excited, and nobody likes anything better than to have the idea they’ve been thinking about for the past five or 10 years to be shown to have relevance.
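As a toy illustration of what assigning a numerical signature to a cluster in high-dimensional space can mean, here is a bare-bones k-means pass over synthetic 11-dimensional data. The two subpopulations and every parameter are invented, and real phospho-flow analysis uses far more sophisticated clustering and gating methods; the sketch only shows that a "blob" in 11 dimensions can be reduced to a centroid even though no one can visualize the space directly.

```python
import numpy as np

rng = np.random.default_rng(1)
dims, k = 11, 2

# Two synthetic, well-separated subpopulations ("cell states") in 11-D
a = rng.normal(0.0, 1.0, (500, dims))
b = rng.normal(4.0, 1.0, (500, dims))
data = np.vstack([a, b])

# Minimal k-means: initialize centroids from random data points
centroids = data[rng.choice(len(data), k, replace=False)]
for _ in range(25):
    # Assign each event to its nearest centroid
    dist = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    # Recompute each centroid as the mean of its assigned events
    centroids = np.array([data[labels == i].mean(axis=0) for i in range(k)])

# Each centroid is an 11-number "signature" for one subpopulation
for i, c in enumerate(centroids):
    print(f"cluster {i}: {(labels == i).sum()} events, centroid mean {c.mean():.2f}")
```

The centroid is the crudest possible signature value; the hard mathematical questions he raises — quantal versus quantitative shifts, overlapping blobs, disease-state profiles — start exactly where a toy like this stops.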

And so we try to excite the computational mathematicians and statisticians in this area, and luckily we’re gaining a lot of traction. If you see any kind of a movement in the last five years or so in the mathematics field, they’ve begun to move into the realm of biocomputation, because finally, many of the biological fields have begun to produce the amount of information that is just beyond understanding, and that’s of course when the mathematics get difficult and when the mathematicians get interested.
