
Harvard's Stephen Wong on Linking Medical Image Analysis and HCS

Stephen Wong
Director of the Center for Bioinformatics
Harvard Center of Neurodegeneration and Repair

At A Glance

Name: Stephen Wong

Position: Director of the Center for Bioinformatics, Harvard Center of Neurodegeneration and Repair (HCNR); Director of Functional and Molecular Imaging Center, Brigham & Women's Hospital; Associate Professor of Radiology, Harvard University.


Stephen Wong has published over 180 peer-reviewed papers, holds several patents in biomedical informatics, and has two decades of R&D experience worldwide with entities such as HP, Bell Labs, the Japanese 5th Generation Computer Systems project, Royal Philips Electronics, Charles Schwab, and UCSF. Recently, he has turned his expertise in medical image analysis to image analysis in biological microscopy of the kind that forms the basis of high-content screening — partly because of what he believes is an image-analysis bottleneck in that area, and partly because he is interested in linking medical imaging data to biomolecular imaging data. He will be giving a presentation on his group's work at September's Society for Biomolecular Screening conference in Geneva, Switzerland. Last week, he talked with CBA News about his group's latest project, the CellIQ image analysis software, as well as his thoughts on how approaches used in medical imaging might benefit microscopy image analysis in biological research.

Your background is first in electrical engineering and later in medical imaging informatics. How did you become involved in biomedical research informatics, such as image analysis?

I have always worked primarily in imaging, and most of my experience is in medical imaging. In the last few years, we've been looking at translational research, mostly because the resolution in radiological imaging is not good enough. This work allows us to go down to the cellular and molecular genetics level — that's why we've been focusing on the area of cellular imaging. Some of it is at the molecular level, too.

So you had been using traditional medical imaging techniques, and now you are focused on fluorescent imaging and microscopy?

Yes, we tried them, and they don't work. We also have synergy with our labs doing other things. I have a couple of appointments — I'm in radiology myself, so we have assessed all of these human and animal imaging techniques. But within the HCNR, we also have confocal and high-throughput screening platforms, like the GE Healthcare IN Cell series. Our collaborators have also used the [Molecular Devices] platform, and we have about five or six different high-content imaging systems around. Our idea is really to link medical imaging and microscopy and cellular imaging together.

Your group is developing methods to automatically extract and analyze information from image data, and claims that this area has been a bottleneck for conducting cellular imaging screens. Can you elaborate?

The experience with all our collaborators has been that the first question they ask when they come to us is whether we can help them analyze the images. They generate images from about five or six projects going on in different labs and different departments, from genomics and RNAi all the way to time-lapse cytometry. Basically, they can develop the bioassays, and the equipment is available, but the existing software just cannot do the job.

There are several commercial solutions out there — are none of them satisfactory?

We tried them all already, and no. Coming from a medical imaging domain, first of all, I was shocked at the lack of precision in biology. That's one thing. Biology is sometimes happy with 60- or 70-percent accuracy, and when we are working with patients, we want 99.99 percent. So there is a cultural gap, first. And secondly, a lot of these are cutting-edge projects, where we're really pushing the envelope. None of the software works for these projects. So really, image analysis in the life science area still lags 10 or 20 years behind medical imaging. Not many people who have worked in medical imaging have gone into life science. The only reason some have lately is that, in the post-genomic era, people have become more interested in how to link all this information back into biochemistry.

The FDA has even made recent overtures about the importance of linking medical imaging down through to molecular imaging…

Yes, and that's exactly what we are doing. We also have another center at Brigham and Women's Hospital. So I have two appointments, and we have two centers running.

Tell me a little about which departments at Harvard are part of this collaboration.

If you looked at a list, it is almost all departments: genetics, systems biology, cell biology, neurobiology, neurology even.

What exactly is it that you are developing, which you are calling the Cellular Imaging Quantitator, or CellIQ?

We are sort of systematically trying to create an independent package, so people can take it and run with it. CellIQ is funded by the NIH and is focused on cells in time-lapse microscopy. The first thing we've been looking at here is a lot of cell cycle work. We have also been looking at some fixed cells, which is easy for us — time lapse is more complicated. Right now I have three different packages running — one is focused on cells; one is focused on neurons, including spines and dendrites, and calculating volumes using two-photon confocal microscopy; and the third one is for zebrafish. It's the same principle, but each one has to use different algorithms. So architecturally, if we tried to squeeze everything into one, the package would start to get huge. It's not agile enough. So we have each group dedicated to solving the problems in their specific areas.
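As a rough illustration of the modular design Wong describes here (separate, swappable analysis modules behind a common interface rather than one monolithic package), the following minimal Python sketch may help. The class names, the interface, and the placeholder segmentation are hypothetical and are not CellIQ's actual API or algorithms.

```python
# Hypothetical sketch of a modular image-analysis design: each assay type
# (cell cycle, neurons, zebrafish, ...) gets its own module behind a shared
# interface. Nothing here is CellIQ's real code.
from abc import ABC, abstractmethod

import numpy as np


class ImageAnalyzer(ABC):
    """Common interface shared by all assay-specific analysis modules."""

    @abstractmethod
    def extract_features(self, image: np.ndarray) -> dict:
        """Turn raw pixel data into numeric features for downstream modeling."""


class CellCycleAnalyzer(ImageAnalyzer):
    """Hypothetical module for time-lapse cell-cycle assays."""

    def extract_features(self, image: np.ndarray) -> dict:
        # Placeholder segmentation: threshold at the mean intensity.
        mask = image > image.mean()
        return {
            "object_area": int(mask.sum()),
            "mean_intensity": float(image[mask].mean()),
        }


class NeuriteAnalyzer(ImageAnalyzer):
    """Hypothetical module for neuron spine/dendrite assays, which would use different algorithms."""

    def extract_features(self, image: np.ndarray) -> dict:
        mask = image > image.mean()
        return {"foreground_fraction": float(mask.mean())}


# A small registry keeps each module separate and swappable instead of
# squeezing every algorithm into one huge package.
ANALYZERS = {"cell_cycle": CellCycleAnalyzer(), "neurite": NeuriteAnalyzer()}


def analyze(assay_type: str, image: np.ndarray) -> dict:
    return ANALYZERS[assay_type].extract_features(image)


if __name__ == "__main__":
    demo_frame = np.random.rand(64, 64)  # stand-in for a microscopy frame
    print(analyze("cell_cycle", demo_frame))
```

Keeping each assay's algorithms behind a small shared interface is one way to stay "agile" in the sense Wong mentions: supporting a new assay means adding a module rather than growing a single package.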

So is this up and running?

I would say it's working in the laboratory environment, but I wouldn't say it's completely developed. We have quite a number of publications out already, but most of them are in the engineering domain, not the biology domain, because it's very technical.

Will this be intended for basic research laboratories, or will it carry over into high-throughput research, or pharmaceutical discovery?

I think it will be both. I have had people at Novartis contacting me, trying to get us to help them with similar problems.

So is this technology something you hope to commercialize?

Probably not — we are open-source people here, and we are part of the National Centers for Biomedical Computing. Last year they funded four centers, and one of them is in the medical imaging area, and I am part of that consortium, for medical imaging computing. Initially we started with MRI, and our group has now moved its efforts into microscopy, because this has become a very challenging problem that we haven't seen before.

Are you familiar with the CellProfiler work being done at the Whitehead Institute? How is this similar or different?

Yes, there are a lot of people doing a lot of good work. That is a different scale, though. The algorithm they are using is nothing new — it is an existing algorithm. A lot of biologists are definitely not trained in image analysis. They can sort of tune their bioassay to meet their needs, but we are more interested in pushing the envelope towards more complex bioassays that cannot be done with existing methods, so we also develop new algorithms to address this.

So do you expect that images from any type of microscopy platform could be analyzed using your software?

Well, I would think it would be modular. I've been in this business long enough to know that no one package can solve all imaging problems. Most imaging of bioassays requires a unique solution. So our package is really open-source and modular. I can imagine later, for example, that there is some other kinase where you need another algorithm developed. When people tell you in any medical imaging field that there is a universal solution, it is a lie.

What type of specific applications is your group using this for?

Multiple areas. We are very disease-focused at the moment. On the more basic science side, we have people in Norbert Perrimon's lab, and they've done a lot of genome-wide RNAi screening, very similar to Whitehead, except on a larger scale, and he's really facing the same problems: he generates tons of data and doesn't know what to do with it. So we're helping them to do this analysis. We have also worked in cancer, on mitotic drug testing. Our projects also involve 2D and 3D neurons, probing different pathways, and looking at tuberous sclerosis complex, putting in different genes and using RNAi techniques to see how the morphology of the neurons is changing.

So basically it's a methodology — we deal with 2D, 3D, and 4D data sets, and we try to look for features that normally cannot be easily [detected] by existing methods. For example, we look at [neuron] spine volume and density — there is no solution out there for that. Neuronal assays are a booming area, but the problem is that people can do the assay and then cannot analyze the data. In the cell, they can at least visually look at it, and have a poor post-doc circle all of these features. But neurons are tough — you have thousands of spines in one dendrite. We're interested in this because we're interested in neurodegeneration. We also have people working with animal models, and really we're working at multiple scales. We have a hypothesis that requires us to go down through different scales and try to link the findings together.

The analysis for us, though, is really only the first step. More interesting is doing data modeling afterwards. Image processing is a means to extract information into some sort of numeric data. Once you convert into alpha-numeric data, then you can do a lot of biological modeling, clustering, data analysis, and biostatistics. This is really important to drug companies, where, for instance, they might want to look at a population curve, and look at the control population and the diseased population. A lot of people can do that, but the problem is that none of them have very good data to do it on. That's why image analysis is becoming so important.
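To make that downstream step concrete: once image analysis has reduced each cell or neuron to numeric features such as spine density, comparing a control population with a diseased population is standard biostatistics. The short Python sketch below uses simulated feature values and a two-sample t-test as a stand-in; the numbers and the choice of test are illustrative assumptions, not results or methods from Wong's group.

```python
# Illustrative sketch only: simulated per-neuron spine-density values standing
# in for features extracted by image analysis, compared across two populations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical feature values for 200 control and 200 diseased neurons.
control = rng.normal(loc=1.2, scale=0.3, size=200)
diseased = rng.normal(loc=0.9, scale=0.3, size=200)

# Two-sample t-test as a simple stand-in for whatever population modeling
# a screening group would actually apply.
t_stat, p_value = stats.ttest_ind(control, diseased)

print(f"control mean {control.mean():.2f}, diseased mean {diseased.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
```

The point of the sketch is the order of operations Wong describes: image processing turns pixels into numbers first, and only then do clustering, modeling, and population comparisons become possible.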
