Baylor's Michael Mancini Discusses High-Content Imaging in Cell Biology

At A Glance

Name: Michael Mancini

Position: Assistant Professor, molecular and cellular biology; Director, Integrated Microscopy Core, Baylor College of Medicine

Background: PhD, cell and structural biology, University of Texas Health Science Center; National Cancer Institute Postdoctoral Trainee, University of Texas Institute of Biotechnology

Michael Mancini studies cellular transcription dynamics using an array of basic research tools. Just as pharmaceutical companies see the need to increase throughput and content in drug screening, Mancini sees the need for it in the academic research lab. A partnership with Q3DM founder Jeffrey Price started him down the path of high-content imaging, and he has since served as a scientific advisory board member for Q3DM, as well as Price’s latest biotech start-up, while maintaining his academic appointments. Inside Bioassays caught up with Mancini a few weeks after he gave a presentation on high-content imaging at June’s ALA LabFusion conference in Boston.

You conduct very basic cell biology research into the dynamics of transcription. Tell me a little bit more about that.

Essentially what we’ve been doing is looking at various nuclear receptors and co-factors in transcription at a single-cell level, whenever possible. The routine, basic scientific approach is using high-resolution microscopy. At least initially, in the last few years, we’ve been looking at time-lapse ligand-induced changes of estrogen receptors, in particular, and androgen receptors, too. Along with others, we showed a few years ago that the ligand induces early organizational changes in the receptor in the nucleus.

We moved on after that to look at mobility and solubility, and this involves using photobleaching techniques, where you would bleach a green fluorescent protein fusion, and then monitor the recovery of the fluorescence. And that showed us that things are moving around a whole lot faster at a molecular mobility level than originally anticipated. The recovery was seconds, not minutes. So there were two parts of the dynamics. First, [there was] a spatial reorganization of the receptor. When we looked closer it showed that … these things were reorganizing into little tiny foci — hundreds or thousands of them — when you could fix the cells and look at them statically. It was quite a misrepresentation, actually, of what was going on, but those foci are rather transient entities, and ligands regulate not only their formation, but also the mobility and exchange of these things, as well as the solubility. There’s a pretty good connection that when things are immobile, they’re relatively more insoluble — not all, though, and that was a whole different line of work that had a nuclear context [regarding] a protein folding disease: polyglutamine folding disease. The two actually overlap pretty well, but our main focus has been the transcription part. The polyglutamine side — single-cell assays for the misfolded aggregates, if you will — we’ve done with some high-throughput imaging, as well.
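
To make the photobleaching readout concrete, here is a minimal sketch of how such a recovery curve is typically quantified: fit the post-bleach intensity to a single-exponential recovery and read off the half-time and mobile fraction. The data are synthetic, and this is a generic illustration rather than the lab's actual pipeline.

```python
# Minimal FRAP-analysis sketch (illustrative only, not the lab's pipeline):
# fit post-bleach intensity to a single-exponential recovery,
# I(t) = I0 + A * (1 - exp(-t / tau)), then report the recovery
# half-time and mobile fraction. Synthetic data stand in for a real trace.
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, i0, a, tau):
    """Single-exponential FRAP recovery model."""
    return i0 + a * (1.0 - np.exp(-t / tau))

# Synthetic trace: pre-bleach intensity 1.0, bleached down to 0.2,
# recovering with tau = 3 s (i.e. "seconds, not minutes").
t = np.linspace(0.0, 20.0, 100)                 # seconds after the bleach
rng = np.random.default_rng(0)
trace = recovery(t, 0.2, 0.7, 3.0) + rng.normal(0.0, 0.01, t.size)

(i0, a, tau), _ = curve_fit(recovery, t, trace, p0=(0.2, 0.5, 1.0))
half_time = tau * np.log(2)                     # time to half-recovery
mobile_fraction = a / (1.0 - i0)                # recovered / bleached depth
print(f"tau = {tau:.2f} s, t1/2 = {half_time:.2f} s, "
      f"mobile fraction = {mobile_fraction:.2f}")
```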

Take me through how you go from this basic cell-biology research to high-throughput or high-content molecular imaging and presenting on that topic at the ALA LabFusion conference.

That [research] led to understanding the dynamics of nuclear cell biology on a single-cell level, and two things were apparent: One, I was going to be extremely old before I ever got around to doing all the things I wanted to do, because this was extremely slow. Literally, it took us a couple of years to knock out a few compounds and see what they really do. So [I had] a rather chance meeting with the developer of a high-throughput instrument — Jeff Price of the University of California, San Diego, whose company was Q3DM. We met a couple of years ago and started interacting and seeing if we could help each other. His little start-up company didn’t have a lot of resources, but we clicked pretty well on an intellectual level, explored how we could help each other, and we’ve been collaborating ever since.

Subsequently, they’ve been bought by Beckman Coulter. I met [the Q3DM] people and learned about high-throughput imaging, lucky for me, from someone who probably built the best one out there — the highest resolution. Going back to the cell biology, what we did to get away from the bulk nuclear issues was build cell lines that would allow us to see even further into what we were interested in. Rather than just a couple thousand dots in the nucleus, we built cell lines in which we integrated promoters with transcription readout systems, and we built an on/off system with mammalian promoters. We were collaborating with [University of Illinois at Urbana-Champaign professor] Andy Belmont a couple of years ago, and we did some integrated DNA studies, but they weren’t really transcription units. And there was some great work by Dave Spector at Cold Spring Harbor — he built this combination bacterial system so you can visualize interesting things. And we kind of synthesized that all together, and this is what we’re about to publish. We said: “Look, let’s start from scratch and build what we wanted to build, which is an integrated mammalian reporter. Let’s make a fluorescent reporter, and let’s visualize all these things.”

This is where high-content comes in. We’re visualizing nuclear-cytoplasmic translocation, DNA binding, chromatin modification, recruiting of co-factors, and then ultimately transcriptional readout in the cytoplasm. That’s where the collaboration with Q3DM came in. They wrote some algorithms, and modified some old ones, and we went through some iterations of how to visualize this. And since we had made a cell line that we can put into multi-well plates, the question became: Can we actually visualize, inside the nucleus, and quantitate binding to the integrated DNA array? Can you measure — in the same image — the reporter activity using algorithms addressing different colors and things? So we have pieces of it all together that we’re excited about, and we’re talking about it, and we’re just getting pilot studies off the ground.
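
The per-image quantitation described here, finding the receptor-decorated array inside each nucleus and reading the reporter from the same field, can be sketched roughly as follows. The channel assignments, thresholds, and measurements are assumptions for illustration, not Q3DM's actual algorithms.

```python
# Rough sketch of a two-channel, per-cell quantitation (assumed channel
# assignments and measurements; not Q3DM's actual algorithms).
# receptor_img: GFP-receptor channel, with a bright focus at the array;
# reporter_img: fluorescent transcriptional-reporter channel.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def quantify_field(receptor_img, reporter_img):
    """Per-nucleus array enrichment and reporter readout for one field."""
    # Segment nuclei from the diffuse nuclear receptor signal.
    nuclei = label(receptor_img > threshold_otsu(receptor_img))
    results = []
    for nucleus in regionprops(nuclei):
        mask = nuclei == nucleus.label
        pixels = receptor_img[mask]
        # Receptor bound at the integrated array appears as a bright
        # focus: compare the brightest pixels to the nuclear median.
        enrichment = float(np.percentile(pixels, 99) / np.median(pixels))
        # Reporter measured over the same region here for simplicity; a
        # real pipeline would measure the surrounding cytoplasm instead.
        reporter = float(reporter_img[mask].mean())
        results.append({"cell": nucleus.label,
                        "array_enrichment": enrichment,
                        "reporter_mean": reporter})
    return results
```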

The high-content part required the highest resolution possible. I mean, a 10X, 20X, even a 40X lens with a low numerical aperture wasn’t going to cut it. In fact, at the expense of speed, we’re testing oil lenses and trying to work out ways of doing 3D stacks. I don’t need to do 100,000 wells a day for this research. We can do primary and secondary screens as we get things going: Screen something, see if it looks interesting, and then go back and use the algorithms and really data-mine the images for high-content with as many algorithms as we can throw at the image. All that data is there — that’s the amazing part. There are a couple of groups of people out there: people who really get mesmerized by imaging and say, “My God, there’s all this stuff happening in that image. How do you pull that data out?” And then there are people saying, “Oh, that’s just descriptive; how do you quantitate it?” And that’s the point. Now we’re able to quantitate things, and it’s not just a descriptive approach. One name I like for it is molecular cytology. We’re genetically engineering cells with all these fantastic multiple fluorescent proteins, and we can build things, visualize things, and integrate all of these things. So we’re talking about multiple assays, in a cellular context, in a well. There’s no need for five or six different assays, and frankly, I’m not so sure what DNA binding in a test tube means. Let’s see what it looks like in the cell, in the context of transcription, et cetera.
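
As a toy version of throwing many algorithms at the image, per-cell features can be tabulated from a segmented field and mined later; scikit-image's stock region properties stand in here for custom high-content measurements.

```python
# Toy version of image data-mining: tabulate per-cell features from a
# labeled segmentation so they can be mined later. scikit-image's stock
# region properties stand in for custom high-content algorithms.
from skimage.measure import regionprops_table

def cell_feature_table(labels, intensity):
    """Dict of per-cell feature arrays (feeds directly into pandas)."""
    return regionprops_table(
        labels, intensity_image=intensity,
        properties=("label", "area", "eccentricity", "solidity",
                    "intensity_mean", "intensity_max"))
```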

On that note, what are the specific challenges of having to work with live cells?

The live-cell biology — the painstaking, slow stuff — mostly sets up what we then run as fixed-cell plates. There are tremendous challenges for live-cell biology at a high-throughput level, which for the most part we haven’t really done yet, and I’m not sure we need to at a high-throughput level. One of the things we’re fascinated with is the mobility of these molecules, but there isn’t a high-throughput way of doing that yet. I would love to do high-throughput FRAP [fluorescence recovery after photobleaching], and we’ve submitted one proposal and one idea of how to build such an instrument, but that’s a few years away, probably. But it is possible. You could do high-throughput FRAP, which would be a 10-second experiment per well, and would provide another whole level of content — the mobility in the context of the cell and whether or not there’s transcription. All this stuff could be worked out — it sounds like science fiction, but it’s not that far away. So most of what we’ve done is fixed-cell stuff, where the machines allow you to do things with different time points, and we’ve done all of this by hand, such as pipetting into plates. We’re just gearing up to do more robotic addition of ligands. But you can throw compounds into a 384-well plate, and then quickly scan what happened from a quick snapshot: Did it change your DNA binding? Did it change your transcription? Did it change your chromatin modification? It’s been a challenge to go from ultra-low throughput, where we plate out a couple dozen coverslips, to taking those cells and putting them into multi-well plates. That’s actually not that simple, so we had to get used to that. And the robotics we’ve ordered and are waiting for should make all that much, much simpler.
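
The plate-level question he poses, whether a compound shifted DNA binding, transcription, or chromatin modification relative to controls, reduces to flagging outlier wells. A minimal sketch, with an invented well layout and invented numbers:

```python
# Minimal plate-level readout sketch: flag wells in a 384-well plate whose
# measurement shifts relative to vehicle (DMSO) controls. The well names
# and numbers below are hypothetical.
import numpy as np

def flag_hits(well_means, control_wells, threshold=3.0):
    """Z-score each well against the controls; return the likely hits."""
    controls = np.array([well_means[w] for w in control_wells])
    mu, sigma = controls.mean(), controls.std(ddof=1)
    return {well: (value - mu) / sigma
            for well, value in well_means.items()
            if abs((value - mu) / sigma) >= threshold}

# Example: mean per-cell DNA-binding (array enrichment) per well.
wells = {"A01": 1.02, "A02": 0.98, "B05": 2.71, "C12": 1.05, "D07": 0.31}
print(flag_hits(wells, control_wells=["A01", "A02", "C12"]))
```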

What are the specific hurdles in being able to perform a technique such as FRAP in high-throughput?

It’s not so much putting the live cell on a microscope and then going from well to well to well. It’s “How do you quickly bleach spots and then monitor their fluorescence quickly?” We can do a cell in a couple of minutes, and then we move to another cell, but we want to be able to do a whole well, somehow, at one time, where we can do a hundred or a thousand cells, and then move on to the next one. We don’t know how to do that yet. We’ve got some ideas of how to, but there’s nothing built yet.
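
Purely as speculation in the spirit of the proposal he mentions, a whole-well FRAP readout might reduce to three frames per well: pre-bleach, immediately post-bleach, and end-point, with a per-cell mobile fraction computed from those alone. A sketch under those assumptions:

```python
# Speculative sketch of a whole-well FRAP readout (no such instrument
# existed at the time of this interview): bleach the entire field, then
# estimate a per-cell mobile fraction from just three frames taken
# pre-bleach, immediately post-bleach, and after recovery.
from skimage.measure import regionprops

def mobile_fractions(labels, pre, post, end):
    """Per-cell mobile fraction from three whole-field frames."""
    fractions = {}
    for cell in regionprops(labels):
        mask = labels == cell.label
        i_pre = pre[mask].mean()
        i_post = post[mask].mean()   # assumes the bleach lowered the signal
        i_end = end[mask].mean()
        # Fraction of the bleached signal that returned by the end frame.
        fractions[cell.label] = (i_end - i_post) / (i_pre - i_post)
    return fractions
```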

But to me — and I’m terribly biased — the amount of content you could mine from such an approach would just be staggering. We can predict at an ultra-low-throughput level whether or not a compound is an antagonist or agonist with pretty good certainty. We published something a few years ago in Nature Cell Biology on the differences between an agonist and antagonist with the estrogen receptor. If your receptor is immobilized, we’ve not seen any example where that’s going to be active. Conversely, the receptors that are hyper-mobile — they don’t seem to work either. So there’s a sweet spot. I’d love to go that route and screen libraries for the ones that are immobilizers or the hyper-mobilizers, and then we can define a class of antagonists.
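
The sweet-spot observation can be written down as a trivial decision rule; the half-time cutoffs below are invented for illustration, not published values.

```python
# The mobility "sweet spot" as a toy decision rule. The FRAP half-time
# cutoffs are invented for illustration, not published values: compounds
# that immobilize the receptor or make it hyper-mobile get flagged as
# candidate antagonists.
def classify_by_mobility(half_time_s, slow_cutoff=30.0, fast_cutoff=0.5):
    """Classify a compound from the receptor's FRAP half-time (seconds)."""
    if half_time_s >= slow_cutoff:
        return "candidate antagonist (immobilizer)"
    if half_time_s <= fast_cutoff:
        return "candidate antagonist (hyper-mobilizer)"
    return "mobility in the active range (agonist-like)"

print(classify_by_mobility(45.0))  # an immobilized receptor
print(classify_by_mobility(3.0))   # normal, "sweet spot" mobility
```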

Such a tool, I would imagine, would be in high demand by pharmaceutical researchers.

If they can wrap their heads around it. They’re pretty much operating on really old technology, and getting them to understand what they can do with high-content is a bit of a challenge. You know, it’s “We do a luciferase assay for transcription, and that’s what we’ve been doing for ten years.” Well, let’s see if we can move you along a little. Actually, what’s exciting is that there are some NIH Roadmap initiatives for academic drug screening. The NIH realizes — and I’m putting words in their mouths — that drug screening isn’t working so well anymore. When you think about the number of drugs that have gone to market — and I’m getting this second-hand, because I’m not in the pharmaceutical business — the success rate of getting a drug to market is ridiculously low. The amount of money these companies are spending just to get a few things out is preposterous, and then “Boom!” there’s the next [blockbuster]. And they don’t share any information! All this biology has been learned along the way through all these failures, but they throw it away or they lock it up. So the NIH is trying to get people in the academic community to start going after small-molecule discovery. And we’re used to things failing. We learn by it, and we share it. “Oh, this didn’t do anything — let’s publish a paper!” That’s unheard of in the pharmaceutical companies. So you might have billions of dollars going into the same type of drug, and companies do the same thing and find out that it doesn’t work, and all this money and time goes by, and to be melodramatic, all these people have died while they’re being tight with their intellectual property. So NIH is trying to fund this new wave of centers to go after that type of academic thinking — it’s not too dissimilar to, perhaps, the Human Genome Project.
