
Researchers Use High-Content Imaging to Demonstrate 'Non-Randomness' of Nuclear Proteins


Paul Freemont
Professor and head, molecular biosciences
Imperial College London
AT A GLANCE
 
Name: Paul Freemont
 
Position: Professor and head, molecular biosciences, Imperial College London, since 2005
 
Background: Director, IC Center for Structural Biology, 2000-2005; various positions, Imperial Cancer Research Fund (now CRUK), 1987-1999; Postdoc, molecular biophysics and biochemistry, Yale University, 1984-1987; PhD, biochemistry, Aberdeen University, 1984
 

 
Scientists from Imperial College London and the Cross Cancer Institute in Edmonton, Alberta, Canada, have defined the spatial associations of a key transcriptional regulatory protein in the nucleus, called CBP, and found that clusters of the protein are not arranged randomly.
 
The research, which is published in the Oct. 20 issue of PLoS Computational Biology, is an example of how sophisticated image-analysis techniques are being used in combination with light microscopy to elucidate basic intracellular protein networks and interactions.
 
The scientists hope to use similar techniques to define the spatial relationships of numerous nuclear proteins in an attempt to understand how these relationships change in diseases such as cancer.
 
Paul Freemont, a professor at Imperial College London and one of the lead authors on the study, took a few moments last week to discuss his group’s work with CBA News.
 
What you’re doing sounds a lot like high-content imaging.
 
There is a huge imaging aspect to the work. One of the goals for most biologists interested in how the nucleus works is to do live-cell imaging. There are, however, a number of restrictions on how many components you can visualize in live cells using fluorescently labeled fusion proteins such as GFP. There are also issues with making stable cell lines expressing the GFP fusions, and although you can get expression down to endogenous levels, it’s still often against the background of the wild-type component. And one is never completely certain about what one is doing to the cell. Huge advances have been made in live-cell work, but we thought there was still quite a lot of work to be done in fixed cells. So the images we were looking at were fixed and stained for endogenous components. They are high-content images, but the advantage of using fixed cells is that you can look at a very large number of cells and, we felt, define specific spatial relationships through more quantitative methods. This would be much more complicated in a live-cell situation.
 
A good deal of high-content imaging is done on fixed cells. The method has seen rapid uptake in drug discovery and functional genomics, but you’re using it to understand the physical localization of nuclear proteins in cells, correct?
 
Right. And that can be broadened out to anything you can image or visualize. There were two real goals here. One was fundamentally trying to understand how the nucleus is organized and works, and the other was to develop a toolkit to begin to statistically quantify spatial relationships within cells. And of course, that can be made more general. Anything you can visualize you can treat as an object, and therefore carry out these types of spatial analyses. The questions need to be driven a little bit by the biology, but I can see the advantage in a higher-throughput system where you’re actually looking at changes in localization due to whatever it might be – some kind of drug testing, or a pre-diagnostic situation.
 
When you start to move into tissues and other samples from pathology labs, there are some technical things that need to be sorted out in terms of what you can visualize, especially in the nucleus. Some of the components we were looking at are quite difficult to visualize in pathology sections. But those things are being overcome, and there is a great deal of interest in cancer pathology circles in visualizing some of these nuclear components in more detail and at higher resolution. In cancer, it’s been well established that significant changes occur in the nucleus as cells go from normal to cancerous. People are trying to get more sophisticated and look at more sensitive markers that could detect things that are pre-cancerous, using some of our biological understanding of the factors involved.
 
You found that the protein CBP was physically close to various structures in the nucleus, not on a random basis. However, you didn’t necessarily see the protein interacting with these structures.
 
That’s right. In the light microscopy world, these foci, or clusters of molecules, don’t necessarily overlap with each other. Even at the resolution of a light microscope, you wouldn’t have seen them mixed up together. However, there is a spatial relationship. In other words, there is a preference to be associated with one thing over another. Even when things are not actually on top of each other, intermingling with each other, there are lots of relationships that need to be explored at this level, where things are, for functional reasons, juxtaposed to each other: not necessarily interacting, but within a localized compartment of the cell or the nucleus. This is a deeper understanding that people have not managed to quantitate properly yet.
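 
The kind of non-randomness test Freemont describes can be sketched as a Monte Carlo comparison: measure how close one set of foci lies to another, then ask how often purely random placement does as well. The Python sketch below is illustrative only; the function names, the uniform null model, and the bounding-box geometry are assumptions rather than the group's published method, which would confine random positions to the segmented nuclear volume.

```python
import numpy as np

def mean_nn_distance(a, b):
    """Mean distance from each focus in `a` to its nearest focus in `b`."""
    diffs = a[:, None, :] - b[None, :, :]       # pairwise difference vectors
    dists = np.sqrt((diffs ** 2).sum(axis=-1))  # pairwise Euclidean distances
    return dists.min(axis=1).mean()

def association_p_value(foci_a, foci_b, box, n_sim=1000, seed=None):
    """One-sided p-value that foci_a lie nearer foci_b than random placement.

    Hypothetical null model: foci_a re-placed uniformly in the bounding box.
    A real analysis would restrict random positions to the nuclear mask.
    """
    rng = np.random.default_rng(seed)
    observed = mean_nn_distance(foci_a, foci_b)
    null = np.empty(n_sim)
    for i in range(n_sim):
        random_a = rng.uniform(0, 1, size=foci_a.shape) * np.asarray(box)
        null[i] = mean_nn_distance(random_a, foci_b)
    # Small p-value: observed foci sit closer to the reference than chance predicts.
    return (null <= observed).mean()

# Example: 30 CBP-like foci and 20 reference foci in a 10 x 10 x 5 micron volume.
rng = np.random.default_rng(0)
cbp = rng.uniform(0, 1, (30, 3)) * [10, 10, 5]
ref = rng.uniform(0, 1, (20, 3)) * [10, 10, 5]
print(association_p_value(cbp, ref, (10, 10, 5), seed=1))
```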
 
What are some of your thoughts about why these molecules might be located near each other, but not necessarily interacting?
 
I think resolution is an issue. With light microscopy, for instance, you can be looking at essentially tens of thousands of macromolecules. Molecular biologists might be looking at the function of single macromolecules or proteins. When transcription occurs, one is always considering single transcription factors binding to transcriptional activation sites. So there is a scale issue in terms of what one is looking at. We’re not looking at individual proteins; we’re looking at a scale more like tens to hundreds of proteins. In the nucleus specifically, some of these are organized into foci, often called nuclear bodies, around which very high concentrations of components exist. You can visualize these by light microscopy, and that is what we showed. This is one of the great issues that we don’t fully understand: why would these concentrations of macromolecules be close to one another? It all comes down to the idea of having functional compartments, something at a meso-scale level, a level higher than the molecular level, where you’ve got compartments being arranged or concentrations of molecules being actively utilized. There are different levels at which to consider functionality.
 
Are you familiar with the work being done by Robert Murphy and colleagues at Carnegie Mellon? They are doing what they call ‘location proteomics,’ which seems to be in the same vein as your work.
 
Yes, I’m a little bit familiar with it. It is very similar in spirit. And it’s not just where proteins are in the cell, but where they are in relation to everything else. When you start doing more sophisticated statistical analyses of that, you can start addressing these questions of probability, which are terribly important. On average, things occur where they do because they have some sort of relationship, not just because they are randomly juxtaposed. Unless you do these kinds of analyses, you may not uncover those relationships. That philosophy is what is driving our work, and the idea of building up some sort of topographical probability map, specifically in the nucleus, is very important to us. The nucleus is very restricted in terms of its volume, very compact, with a lot of activity but no clear higher-order organization.
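 
As a rough illustration of such a topographical probability map, one could accumulate focus positions from many nuclei, each normalized into a common reference frame, and record how often each region is occupied. This minimal sketch assumes positions have already been normalized to the unit cube per nucleus; the binning and the presence-or-absence convention are illustrative choices, not the paper's method.

```python
import numpy as np

def probability_map(foci_per_nucleus, bins=20):
    """Fraction of nuclei with at least one focus in each spatial bin.

    foci_per_nucleus: list of (n_i, 3) position arrays, each normalized so
    that its nucleus occupies the unit cube (an assumed preprocessing step).
    """
    counts = np.zeros((bins, bins, bins))
    for foci in foci_per_nucleus:
        hist, _ = np.histogramdd(foci, bins=bins, range=[(0, 1)] * 3)
        counts += hist > 0              # presence/absence per nucleus, per bin
    return counts / len(foci_per_nucleus)

# Example: three nuclei with a handful of normalized focus positions each.
rng = np.random.default_rng(0)
nuclei = [rng.uniform(0, 1, (5, 3)) for _ in range(3)]
pmap = probability_map(nuclei)
print(pmap.shape, pmap.max())
```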
 
I noticed that you used some tools commonly used in high-content analysis, such as Molecular Devices’ MetaMorph software and some Applied Precision image-analysis software. But it also seems like you wrote your own analysis algorithms. How did these different image-analysis approaches fit together?
 
We wrote most of the analysis algorithms ourselves. Why? There is a lot of good commercial software out there to do 3D reconstructions from image slices, and we haven’t used any fundamentally different approach. As researchers, we just felt that we wanted the code to be our own so we could make adjustments and develop how it might work in a research environment to suit our needs. Often with software that you buy off the shelf – and I think this is a common problem for people who are more quantitative – you can’t actually get hold of the code; you can’t adjust it or modify it to your specifications. Having your own allows you that flexibility. From that, we’ve now got new routines that allow us to very easily identify objects automatically in 3D image slices. It just feels better.
 
My own background is in structural biology, so I’m quite used to dealing with algorithms and computational code, and it wasn’t that big a deal for us to do. It felt natural. It’s not that we don’t like the commercial software, but in a research environment, where you’re trying to push approaches that might require you to manipulate what you do, you need your own tools. And the statistical tools that we’re developing – we have some unpublished work using these that will be coming out – we’d like to make available to the academic community.
 
What’s next for this research?
 
I think we’ve just developed a way of taking images and automatically identifying objects. That exists in commercial software, but this routine is extremely efficient and easy to use. It means we can take any image data set with three-color labeling and quickly and automatically define which object is which. One of the problems with imaging, of course, is background noise. And one of the problems with some of the experimental techniques cell biologists use is that you can get unusually high backgrounds that are not meaningful. We’ve had to work pretty hard in that area to make sure we can sort those problems out.
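 
A minimal sketch of such an object-identification routine might run background correction, then thresholding, then 3D connected-component labeling. This is not the group's code; the top-hat filter, the k-sigma threshold, and the minimum-size cutoff are assumed choices built from standard scipy.ndimage operations.

```python
import numpy as np
from scipy import ndimage

def find_foci(stack, tophat_size=5, k=3.0, min_voxels=8):
    """Return centroids of bright foci in a 3D intensity array (z, y, x)."""
    img = stack.astype(float)
    # Suppress slowly varying background with a grey-scale white top-hat filter.
    corrected = ndimage.white_tophat(img, size=tophat_size)
    # Global threshold: k standard deviations above the corrected mean.
    mask = corrected > corrected.mean() + k * corrected.std()
    # Connected-component labeling in 3D (6-connected by default).
    labels, n = ndimage.label(mask)
    # Drop tiny objects that are likely residual noise.
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
    return ndimage.center_of_mass(mask, labels, keep)

# Example on synthetic data: two bright blobs in a noisy 32^3 volume.
rng = np.random.default_rng(0)
vol = rng.normal(10.0, 1.0, (32, 32, 32))
vol[8:11, 8:11, 8:11] += 30
vol[20:23, 20:23, 20:23] += 30
print(find_foci(vol))
```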
 
We’re looking at something like 12 or 13 different nuclear components and trying to build up a spatial map to define relationships between them. We’re also trying to look at different cell lines. These are generally primary cells that we’re using. We’re looking at cells becoming transformed, and then using our statistical toolkits to see if there are spatial differences in the locations of these components. We would like to begin to see at this level whether there are some interesting spatial reorganizations as a function of cell differentiation, or transformation, et cetera.
 
Extending that to other areas depends on what the biological problems are. I think Murphy’s group is clearly looking at this from a more high-throughput proteomics view. We’re driving it through biological interest in the nucleus and how it is organized.
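 
One way to ask whether two cell populations, say primary versus transformed, show the kind of spatial reorganization Freemont describes is a permutation test: pool the nuclei, reshuffle the group labels, and see how often a random split produces as large a difference between maps as the real one. This sketch assumes the hypothetical probability_map() helper from earlier; the statistic and all names are illustrative, not the group's toolkit.

```python
import numpy as np

def map_difference(group_a, group_b, n_perm=1000, seed=None):
    """Permutation p-value for the summed absolute difference between maps.

    group_a, group_b: lists of per-nucleus (n_i, 3) normalized focus arrays.
    Assumes probability_map() from the earlier sketch is in scope.
    """
    rng = np.random.default_rng(seed)
    observed = np.abs(probability_map(group_a) - probability_map(group_b)).sum()
    pooled = group_a + group_b            # concatenate the per-nucleus lists
    n_a = len(group_a)
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(len(pooled))   # reshuffle group labels
        a = [pooled[j] for j in perm[:n_a]]
        b = [pooled[j] for j in perm[n_a:]]
        null[i] = np.abs(probability_map(a) - probability_map(b)).sum()
    # Fraction of random splits at least as different as the observed one.
    return (null >= observed).mean()

# Example with synthetic groups: the second group is compressed along z.
rng = np.random.default_rng(0)
primary = [rng.uniform(0, 1, (5, 3)) for _ in range(10)]
transformed = [rng.uniform(0, 1, (5, 3)) * [1, 1, 0.5] for _ in range(10)]
print(map_difference(primary, transformed, n_perm=200, seed=1))
```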
