
Jonathan Minden on Differential Fluorescence Gel Electrophoresis


At A Glance

Name: Jonathan Minden

Position: Associate professor of biological sciences, Carnegie Mellon University, since 1991.

Background: Post-doc in developmental/cell biology with Bruce Alberts, University of California, San Francisco, 1986-90.

PhD in biochemistry and molecular biology, Albert Einstein College of Medicine, New York, 1985.

BS in chemistry and biochemistry, University of Toronto, 1980.


How did you first get involved with proteomics?

We’ve developed [DIGE], which Amersham is selling. I had the idea back in 1981 when I was a first-year graduate student. But at that time I didn’t know how to do the chemistry or how to do the imaging or any of that stuff, so basically it got left on the back burner for over 20 years.

What prompted your idea for DIGE back in 1981?

Way back then, I was in the lab working with Dictyostelium, and I had mutants that could not move, and I wanted to figure out what was wrong with their cytoskeletons. Dictyostelium back then didn’t have very good genetics so I figured if we used a protein-based approach we’d be able to see the protein differences between a moving and a non-moving cell. The idea was, if I could figure out a way of running both samples on the same gel, I would be able to very quickly figure out what’s changed between the mutant and wild type cell. But I didn’t have the chemistry background to actually synthesize all the dyes.

How did the idea come up again?

So it came up again 20 years later when I was just starting my faculty position here at Carnegie Mellon and I switched organisms to one that did have good genetics — Drosophila. We were working on a very similar problem of how cells change their shape during gastrulation. And so the problem came up again of how you do it. Genetics had given us some answers, but none of the genes had anything to do with cell shape or controlling the cytoskeleton. So I figured maybe we should try this protein-based approach. And I was lucky enough that here, the person who’s developed all the cyanine dyes for biological use is Alan Waggoner. So he and I sat down at one lunch and I told him the chemistry I needed and he said, ‘That’s simple,’ and he designed it. Then we got a chemistry graduate student, Mustafa Ünlü, and Mustafa came in and synthesized the dyes and it basically worked right away.

What was available to do these sorts of things before DIGE?

Before we got on the scene, basically people just ran 2D gels and tried to compare them. The major limitation is that no two gels are exactly alike. And the other main limitation is that they used either silver staining or Coomassie Blue to visualize the proteins, and those stains don’t respond linearly.

When did you license DIGE to Amersham?


Have you worked on any updates since then?

Yes, with Amersham. The original dyes were used at very low stoichiometry and the new dyes are cysteine-reactive so they’re much brighter.

What sorts of new technologies are you working on now?

We’re developing a new technology for purifying whole cell extract proteins and separating away nucleic acids and lipids and salts and any other interfering compound[s]. It’s a matrix that captures the proteins and allows you to wash everything else away and then reveal the proteins later. There’s a small company, ProteoPure, that’s developing it and will be beta testing in a few months.

The other technology we’re developing is software to automate the comparison between different images. Amersham sells something, but we think we can do it a lot better and cheaper. That’s probably a year off.

The third technology is probably a few years off, but we think it will potentially be very important. Right now, with the minimal labeling dyes, we can see proteins at about 50-fold higher sensitivity than mass spec can identify them. Typically, when we cut a protein out of a gel, the minimal amount of protein is about 10 fmol. We can see below half a femtomole of protein. The fluorescence-based system allows you to be very sensitive in terms of collecting photons from the fluorescently-tagged protein, whereas most mass spec destroys the peptides in the process of analyzing them. So you only have one shot at reading them. The FT-ICR machines are able to hold the peptides so you don’t lose them, but they’re awfully expensive.

So we’re at the computational point of figuring out a fluorescence-based method for identifying proteins. We always think of antibodies as being monospecific — we want our antibodies to see only one protein and nothing else — but if we turn that problem around and say, ‘I want that antibody to recognize commonly shared peptide sequences or epitopes,’ then what is the smallest number of short epitopes that are required to uniquely identify every protein in a proteome? When we did that simulation with yeast, we found that 44 antibodies [were enough] to uniquely identify every single yeast protein.

So if you have a chip that has these 44 antibodies on it and wash a fluorescently-tagged peptide that originated from the protein of interest over the chip, it will bind and create a barcode of what that protein is. That’s the idealized version. Like in mass spec, you lose a certain fraction of proteins and there are always false positives and contaminants and things like that. [But] we did a simulation saying, ‘What if you lost 40 percent of your peptides, there were 2 percent false positives, and you had a mixture of five random proteins?’ And when we did that simulation, we found that about 500 antibodies were sufficient to uniquely identify all five proteins. So we’re in the process of developing those antibodies. The beauty of it is that the proteins will be fluorescently tagged, so the sensitivity is at the same level as our DIGE technology.

The simulations have been done in yeast but it will be the same for any organism basically.
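The robustness simulation described above (40 percent peptide loss, 2 percent false positives, a five-protein mixture) can be sketched as a toy model. Panel size, binding probability, and the scoring rule here are all illustrative assumptions, not the published simulation.

```python
import random

random.seed(1)

# Toy robustness check: 40 percent of true signals are lost, 2 percent
# of antibodies light up spuriously, and the sample mixes five proteins.
# All sizes and probabilities below are illustrative assumptions.
N_PROTEINS, N_ANTIBODIES = 200, 500
P_BIND, P_DETECT, P_FALSE = 0.05, 0.60, 0.02

# profile[p] = set of antibodies that recognize an epitope in protein p
profile = [
    {a for a in range(N_ANTIBODIES) if random.random() < P_BIND}
    for _ in range(N_PROTEINS)
]

mixture = set(random.sample(range(N_PROTEINS), 5))

# Simulated chip readout: an antibody is positive if it binds a mixture
# protein and that signal survives peptide loss, or by false positive.
readout = set()
for a in range(N_ANTIBODIES):
    hit = any(a in profile[p] for p in mixture) and random.random() < P_DETECT
    if hit or random.random() < P_FALSE:
        readout.add(a)

# Score each protein by the fraction of its antibodies that lit up;
# mixture members should score far above the background.
score = [len(profile[p] & readout) / max(len(profile[p]), 1)
         for p in range(N_PROTEINS)]

for p in sorted(mixture, key=lambda p: -score[p]):
    print(f"protein {p:3d} (in mixture): score {score[p]:.2f}")
best_outsider = max(score[p] for p in range(N_PROTEINS) if p not in mixture)
print(f"best score outside the mixture: {best_outsider:.2f}")
```

Even with heavy dropout, each true member still lights up roughly 60 percent of its own antibodies, while an outsider only matches the background overlap, which is why a larger panel tolerates the noise.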

What biological projects are you working on right now?

[We’ve been] looking at protein changes during gastrulation in fruit fly embryos. There were two main take-home messages. We originally thought that we’d see protein changes exactly at the time that the cell shape changed; instead, we found that 90 percent of the protein changes were happening before the cell shape changed and that only a small number of proteins were actually changing at the time of the cell-shape change. But when we knock out any of these proteins, they all cause cell-shape defects. So once we identify a protein change, we then use RNAi to knock it out and see its impact. Every protein we’ve tested so far has validated. And now we’re currently working on the next round.

So one group of proteins that changes during cell-shape change is the proteasome proteins. So if we now do DIGE on embryos that have the proteasome knocked out by RNAi, we can see the next level of protein changes. So basically it’s like turning a crank — you discover what proteins change, [then] change those proteins, and then discover the next level of proteins that change. So now we’re on the second round of doing that, and hopefully we’ll be publishing that by the end of the year. We’re applying the same approach to looking at cell death in fruit fly embryos.

One other paper that’s under review is where we ask a question about biological noise in yeast. In the microarray world, where you just compare identically grown colonies of yeast and look at their expression levels, there’s something they call biological noise, and they found something like 300 messages that were changing two-fold at random. We looked at the proteomes of the same yeast and found that they don’t change. Looking at the protein products of these fluctuating genes, we don’t see them changing at all. And so the conclusion is that the proteome is buffered relative to the transcriptome.

The next step from that is to look at mutants in yeast genes that don’t kill the cells [and asking] ‘how do cells adapt to genetic change?’ When we do that we can see the pathways that are affected by these genetic changes. And so far it’s working out beautifully.

What difficulties do we still need to work through technically in proteomics?

Number one is experimental design. If you’re going to be using a very sensitive technology, then you need to design your experiment to reduce any variability. So people say, ‘Oh, I want to look at differences between lung cancer tissue and normal lung to find disease markers for lung cancer.’ But if you do that in humans, the tissue has to be exactly the same. We’ve done studies looking at morphologically identical regions of a brain that are just adjacent and seen major protein changes. So if you’re going to be taking tissue from an animal and trying to compare it, you have to know exactly what you’re getting. We’re starting to work on serum proteome comparisons, and the same is true there — your serum proteins are going to change whether it’s Sunday and you’re relaxed, or Tuesday and you’re all pepped up at work. So there has to be great effort in designing the experiments properly.

And there also has to be an improvement in sample throughput. One of the things that gets in my craw is that the mass spec people say, ‘Oh, we have incredible throughput — we can sequence 50,000 peptides in two days’ in one of their multidimensional chromatography MS experiments. But that’s one sample on one half-million-dollar machine. So they have tremendous peptide throughput. But in my laboratory, we can run 40 2D gels a day and take pictures of them the next day. So we can’t identify the proteins as quickly as they can, but we can find the biologically significant differences at a much faster rate. [Gels] also have a much higher dynamic range than mass spectrometry.

The second thing that gets in my craw is statistics. We’re always talking about the statistics of our measurements, whereas the mass spec people will publish a paper using one sample and saying they got all these proteins, but they don’t say whether, if they ran the same sample a second time, they would get all the same proteins, or what the overlap is. Statistical reproducibility is absolutely essential when you want to look for disease markers. But if you can only run one sample every two days on a machine, then you’re not going to get enough information to really get good statistics out.

The fourth thing is cost. Running a 2D gel and taking pictures of 2D gels is a lot cheaper than running a mass spec facility. So my feeling is, any biology laboratory out there can develop the tools and afford the tools to run 2D gels and find the biologically significant differences and send their sample off to a mass spec facility.

So you think it makes more sense to use a central facility for the mass spec work?

Absolutely. You need at least one or two PhD-level people running a mass spectrometer. But I’ve had high school students run 2D gels and get beautiful results.

