At A Glance
Name: Larry Sklar
Position: Professor of Pathology, University of New Mexico School of Medicine; Co-director, National Flow Cytometry Resource, Los Alamos National Laboratory
Having been involved in flow cytometry for over 20 years, Larry Sklar has seen the technique evolve — an evolution to which he contributed. Early in his career, Sklar saw ways that flow cytometry could be improved for drug discovery, and now, as principal investigator of an NIH-funded bioengineering consortium for flow cytometry-based drug discovery, he hopes to help commercialize a unique flow cytometry platform for that very purpose. Sklar took a few moments last week to discuss with Inside Bioassays the “evolution of flow.”
You’ve been involved in flow cytometry since the early 1980s. How did you initially develop your interest in flow cytometry?
That’s a good question, and I don’t think I’ve ever been asked that before. But I do have an answer. I was trying to develop ligand-binding assays for cell-surface receptors. The types of things that people were doing at that time were ligands that changed their fluorescent properties upon binding: it might be a lifetime change, a fluorescence polarization change, or a spectral change. I wanted to see what happened when you looked at cells that had bound ligand in a flow cytometer, and it was actually my first flow-cytometer experiment. What I saw, to my surprise, was that when you did this binding experiment and added your fluorescent ligand, the signal was essentially the same whether or not you washed away the excess unbound ligand. That meant the flow cytometer was intrinsically discriminating the free component from the bound component. That is, you could actually see the bound component whether or not the free component was there.

There is an interesting reason for that. The cell is measured as a fluorescent pulse above a background, and the background signal has contributions from the free [component]. But you’re looking at the signal above the background, or the bound, and the flow cytometer intrinsically discriminates this in a homogeneous way. For me it was a really remarkable observation. I think people who understood flow cytometry understood that this is what you’d expect, but that particular observation had not been vigorously exploited for measuring ligand-receptor interactions, or, more generically, molecular assemblies.
In the early days of flow cytometers coming into the laboratories of biologists, there were a few of them around, they were just becoming commercially available, and they were being used primarily for antibody types of experiments, where people would label cells with antibodies and then wash away the unbound antibody to make their measurements.
Around that time, it seems that flow cytometry became all the rage, but then went away for a while, but is now making a comeback. Have you seen that trend?
Yes, absolutely. When the commercial machines became available, there was a lot that could be done that wasn’t research. So flow cytometry became widely used in clinical diagnostics, leukocyte subsets, and disease diagnostics, and the technology became very widely distributed: 20,000 or more flow cytometers around the world. People were using [them] for routine types of things, and then for their own special applications in research. During the 1980s, ongoing technology research became concentrated in a small number of labs. There were groups, for example, at Stanford and Rochester [Institute of Technology], and there’s been a group at Purdue for a long time. There were groups at national labs, Los Alamos and Livermore, and people began thinking more about where the technology was actually going to be developed. For example, Stanford concentrated on what might be called high-content, or multi-parametric, analysis. I actually left Scripps Research Institute, where I was doing research, to [go to] Los Alamos, where I could do development of flow cytometers. I wanted to bring kinetic analysis to flow cytometry, because ligand-receptor interactions in biological systems may occur over seconds and minutes, but the signals that come out of them may occur in a sub-second time frame. I wanted to have flow cytometers that could make measurements of cell physiology in the appropriate time frame to understand what was going on at receptors. So I went to Los Alamos to build sub-second time-resolution kinetic devices. Other places have focused on other aspects of flow cytometry, but there were only a handful of places developing novel instrumentation, and a little bit going on in some of the companies in terms of different modalities of detection and different sample handling.
People view many screening techniques as complementary, but there does seem to be a bit of a divide between flow-based and plate-based approaches. Do you see these as competitive, or complementary?
They’re very complementary. I’ve thought about how to express this. There’s an initiative that you may be aware of at the NIH called the Molecular Libraries Screening initiative [see Inside Bioassays, 7/27/2004]. Our expectation was that most of the groups going into it would be using plate-based techniques, whether imaging or not. So we thought about how to represent the differences between plate-based techniques and flow cytometry.
They both have the potential of using small volumes. Flow cytometry is interesting in that it’s equally happy dealing with particles or cells, and some plate-based techniques can go either way. There’s no particular reason to use an imaging plate-based technique for particles, where the labeling is more or less homogeneous. One of the real strengths of the plate-based techniques is when you’re looking at cellular topography, because you can just image that. That’s a little bit more difficult in flow cytometry, because you have to set up your experiment in a way that lets you discriminate topographical features, like something that’s inside or outside; you might have to wash away something on the surface so that you know your signal is from the inside of the cell, as in a receptor-internalization assay. In that sense it would require an extra step as compared with microscopy.
On the other hand, flow cytometry is particularly good at this discrimination between free and bound. Microscopy does that under some conditions where you’re looking at depth of focus and you can see particular levels on the cell or particle.
There are really a lot of aspects to this. Flow cytometry is really good at doing kinetic analyses, because you can put rapid-mix devices — stop-flow devices — in front of a flow cytometer. Microscopists talk about doing high-content. You can do that in flow cytometry in sort of the same way, where you have multiple parameters, but you can also use those parameters to do multiplexing — to run many assays simultaneously. Virtually any fluorescence assay can be adapted to the flow cytometer, and in some cases it’s easier because you’re not going to be washing things and sedimenting them on a surface. So a lot of it has to do with the nature of your sample. If you have adherent cells, there would be no point in suspending them and running them in flow. If you have cells that don’t want to adhere, you might be very happy with a flow cytometer.
Another difference that’s been talked about is that there isn’t really a problem with the throughput of flow cytometry …
It is [a problem], because you mean two things by high-throughput. The way people typically talk about high-throughput in flow is the number of cells per second. And flow can do that.
But if you’ve set up an assay where each cell is an individual assay (let’s say you’ve put an expression library into cells, and each cell is expressing something different, so each cell can be an assay), that’s another type of throughput.
In addition, what happens when you want to go from sample to sample? In other words, you want to screen your cells against different compounds. That other type of throughput has not been available in flow until recently. Basically, you were in a manual mode, where each sample was put on individually, or you had these auto-samplers that did a sample every minute or so. Now some of the commercial devices are getting to two samples or so per minute.
So there are different types of throughput considerations. Our contribution is the latter — doing whatever you can do with flow, but be able to go from sample to sample very quickly, to potentially a sample per second.
Your lab has developed a unique flow cytometry platform. What is different about it, and how can it be used in drug discovery?
Let me start with motivation. We were trying to build devices that could make measurements quickly. That was basically a stop-flow mixer in front of a flow cytometer. As we became more aware of bioassay development in the mid-1990s, we realized that flow could be applied in this arena if we thought about what it would take to do an assay fast, but then over and over again. We started out with similar concepts, and they evolved, and we wound up with a different device, but we were basically thinking about what flow cytometry needed to do to be competitive in this drug discovery capacity. That was fast sampling, and then assays of not only cell physiology, but also the kinds of molecular assays people were already running: molecular binding events measured by surface plasmon resonance, scintillation proximity assays, or fluorescence polarization. So we wanted to bring a high-throughput front end to both cell-based and molecule-based assays.
What we wound up with in the second iteration, in the technology we call HyperCyt, is really a very simple notion. If you can attach an auto-sampler to a flow cytometer, all you need to do is go in with a probe from the auto-sampler into one well of a multi-well plate, pick up a few microliters, and then use a peristaltic pump to pull the sample into the flow cytometer. The pump keeps working as the probe moves from well to well, so an air bubble is picked up between samples. Each sample has an air bubble behind it before the next sample is picked up. We can walk through a 96-well plate as fast as the auto-sampler can move. By changing the peristaltic pump rate and the tubing dimensions, we can create volumes and flow rates that are compatible with a flow cytometer.
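The sampling scheme described here reduces to simple arithmetic: the pump rate times the probe's dwell time sets the volume drawn per well, and the dwell plus the in-air move between wells (which draws the separating bubble) sets the well rate. A minimal sketch, where the pump rate, dwell time, and gap time are illustrative assumptions rather than figures from the interview:

```python
# Back-of-the-envelope model of auto-sampler + peristaltic-pump sampling.
# All specific numbers below are assumed for illustration only.

def sample_volume_ul(pump_rate_ul_per_s: float, dwell_s: float) -> float:
    """Volume aspirated from one well: pump rate x time the probe dwells."""
    return pump_rate_ul_per_s * dwell_s

def wells_per_minute(dwell_s: float, gap_s: float) -> float:
    """Wells sampled per minute; the gap is the in-air move between wells
    that draws the air bubble separating consecutive samples."""
    return 60.0 / (dwell_s + gap_s)

# Assume the pump pulls 2 uL/s, the probe dwells 1 s per well, and the
# move between wells takes 0.5 s.
print(sample_volume_ul(2.0, 1.0))   # 2.0 uL per well
print(wells_per_minute(1.0, 0.5))   # 40.0 wells per minute
```

Slowing the pump or narrowing the tubing shrinks the per-well volume toward the one-to-two-microliter range mentioned later, without changing the well rate.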
What’s the commercial status of this?
We have a partner who will build these units on demand for people who are interested in them. The partner is HT Micro, which is a group in Albuquerque that is interested in microfluidic technologies. And this is actually microfluidic by definition, but what we’re doing is sort of interfacing the micro world with the macro world. We’re sampling from wells, but we’re only typically using one or two microliters with this device.
We think this technology is going to allow us to run high-throughput assays that are multiplexes of 10 or 20, and then do 40 or 50 of these multiplexes per minute, for potentially 1,000 assays per minute. Any flow cytometer is going to be capable of doing these assays, so we think the business opportunity is really interesting, because there are already 20,000 or so of these operating around the world. The front end that we are interested in providing for a fairly low cost could make any flow cytometer into a high-throughput flow cytometer. All of these types of assays wind up being available in the context of instrumentation that is widely distributed. That’s our vision: it’s cell-based, but you have the molecular option.
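The throughput estimate quoted here is a straightforward product: assays per minute equals the multiplex depth carried in each well times the number of wells sampled per minute. As a sketch:

```python
# Multiplexed-throughput arithmetic: each well carries a multiplex of
# several assays, and wells are sampled at some rate per minute.

def assays_per_minute(multiplex_depth: int, wells_per_min: int) -> int:
    """Total assay throughput of a multiplexed, sampled screen."""
    return multiplex_depth * wells_per_min

# A 20-plex sampled at 50 wells per minute gives the quoted figure.
print(assays_per_minute(20, 50))  # 1000
```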
Who holds the IP on this?
The intellectual property is held by the Science and Technology Corporation of the University of New Mexico.
How has your lab been using the technology?
We’ve actually been doing some of our own drug discovery, working on G-protein coupled receptors. We’ve been using competitive binding assays based on fluorescent ligands, where we already know something about the receptor, and we’re looking at libraries of compounds that we’re getting primarily from Chemical Diversity Labs in San Diego. We’re not doing brute-force screening. We’re doing computational pre-screening by identifying a pharmacophore, and then selecting a subset of the chemical diversity library that is compatible with our pharmacophore. We’re screening libraries that represent about one percent of their libraries.
We’re trying to make the screening process itself more efficient as well: at the same time we’re applying the hardware, we’re developing the other pieces of the process. In an academic environment, we’re not entirely comfortable going into screens in a blind way. We want to use information that we can get about the receptors before we start.
We’re doing cell-based assays for drug discovery, but we’re also using flow cytometry to understand the mechanism of molecular assemblies. As an example, one of the things we’ve focused on is the problem of how you study membrane receptors when they interact with things on the inside and outside of cells. If you leave them in the cell, you can only speculate about what they’re doing on the inside. So we’ve become interested in solubilizing membrane receptors in detergent, then associating them with particles. If they’re in detergent, they can interact with both intracellular and extracellular components. In the article [we published] in Trends in Pharmacological Sciences [2004 Dec;25:663-9], we’ve been putting G-proteins on beads, or microspheres, and then interacting them with a soluble receptor. That assembly happens when a ligand is there. Our work has had to do with discriminating full agonists from partial agonists, and we used this approach to develop a theory that describes the difference between them.

In the G-protein coupled receptor arena, there are some interesting questions that remain about what happens during cell activation: which parts of that ternary complex come apart? Because we’ve been able to study those pieces individually, we’ve been able to develop systems that discriminate which part of the complex comes apart when. We’ve also been doing this in a multiplexed fashion … where you can discriminate a partial agonist, a full agonist, and an antagonist. That’s just a simple multiplex, but you can imagine doing many types of these where you have different families of G-proteins and different families of receptors, and beginning to look for specificity, and that begins to sound like proteomics.