Tom Kodadek on Antibody Sandwiches and Protein Chips in Kitchens


At A Glance:

Name: Thomas Kodadek

Age: 43

Position: Professor of internal medicine and molecular biology, University of Texas Southwestern Medical Center, since 1998.

Director, Southwestern Center for Proteomics Research, funded by NHLBI.

Background: Professor of chemistry, University of Texas, Austin, 1987-98.

Post-doc in biochemistry, University of California, San Francisco, 1985-87.

PhD in organic chemistry, Stanford University, 1985.

BS in chemistry, University of Miami, 1981.


How did you get involved with proteomics?

I’ve always been interested in protein recognition compounds, both for analytical purposes and for manipulating protein-protein interactions. I’m mostly interested in peptide mimics. There’s a basic theory out there that the kind of really tiny molecules the drug companies love are great for hitting enzymes because they sit in these big canyons that represent enzyme active sites. But in general, the surfaces of proteins that enter into protein-protein interactions are much shallower and much bigger. These tiny molecules are actually quite poor at disrupting or modulating protein-protein interactions. It’s like the princess and the pea — it’s too small a thing for them to feel. So we think we’re going to have to make synthetic molecules that better mimic these interaction domains — much bigger things that look vaguely like the peptide they bind.

Another thing that pulled me into proteomics is that I got involved in a big genomics program at Southwestern as a result of my collaboration on some transcription products. I’ve been more and more impressed with the power of genomics over the last few years, and I’d like to get the protein technology up to that level of firepower.

What are you working on at your NHLBI center?

This NHLBI project was written specifically with the aim of developing new technology for proteomics. We really believe that the seminal problem in proteomics is to develop better technology to quantitatively measure protein levels, activities, and post-translational modifications. And that’s still very hard.

The leading contenders nowadays for doing that are the increasingly powerful mass spec techniques — ICAT technology has been a big addition to that. On the other hand, I think you can make a strong argument that protein-detecting microarrays would be highly competitive with that — in fact [in] many ways superior — if you could make them, which you can’t right now.

Why can’t we make these microarrays yet?

This is what we’re spending the vast majority of our time on. The problem is that to make a protein-detecting microarray with any kind of coverage, you need hundreds or thousands of specific capture agents. Nobody has that.

How are you trying to solve that?

In a number of ways. First of all, my collaborators here, Ross Chambers and Stephen Johnston, have been revolutionizing the way we make antibodies. They’ve recently adapted a system called genetic immunization — they just published v.1 in Nature Biotechnology. Genetic immunization is a way to generate antibodies without ever having a protein epitope. The idea is that you make a piece of DNA that encodes the epitope of interest, and Ross has figured out a bunch of bells and whistles you can attach to it that force the mouse to mount a rip-roaring antibody response to this epitope. It’s kind of a trade secret right now what those bells and whistles are, but they’re very effective. There are coding sequences that encode peptides that do things like multimerize the epitope. There are [also] epitopes in there that stimulate T cells — things like that.

They take an artificial gene and coat it onto gold BBs. Then they use a gene gun — which is really just a shotgun — to shoot these BBs into the ear of the mouse. It’s a real kind of Texas procedure. Enough of these things get into dendritic cells, which are the main antigen-presenting cells. Then the DNA falls off the BBs, gets into the nucleus, and directs the expression of a protein, which then generates the immune response.

There are two cool things about this protocol. One is that it’s unbelievably fast. Since you never require a protein, we can just build these genes out of synthetic oligonucleotides and get them into a mouse within two days. We now have a protocol that allows the mouse to generate very good polyclonal antibodies within nine weeks, which is much shorter than a standard antibody protocol. Also, the antibodies we get out of this almost always recognize the native form of the protein, which is really not the case with standard immunization technologies. Frankly, we don’t have a clue as to why this is.

For proteomics technologies, that’s really fantastic because either we can use these antibodies as capture agents if we immobilize them, or if we capture a protein of interest in some other way, like with chemicals, we can use them as sandwich agents. One simple way to quantify how much of protein X you have is with a sandwich assay, where you come in with a second molecule that also binds protein X, but in a different place. If that sandwich molecule is itself labeled, its signal is an indirect reflection of how much protein X you’ve captured.
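[Editor’s note: as a minimal sketch of how such a readout can be quantified (our illustration, not a protocol Kodadek describes), suppose the capture agent binds protein X with dissociation constant K_d. At equilibrium, the fraction of capture sites occupied follows a simple binding isotherm,

θ = [X] / (K_d + [X]),

and the labeled sandwich signal is roughly S = S_max × θ. Well below saturation, where [X] is much less than K_d, S is approximately proportional to [X], so reading the signal against a standard curve of known concentrations gives an estimate of how much protein X is present.]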

There’s a second very important advantage of using that kind of assessment technology. There are a lot of engineering and physics types who have developed cool ways to electronically or otherwise quantify these kinds of binding events. Another way to do that is with surface plasmon resonance. But those kinds of strategies put an unreasonable specificity expectation on your capture agent — nothing is that good. So the cool thing about a sandwich assay is that you get two orthogonal binding events that have to occur in order to register a signal. Each one is going to have a little bit of sloppiness, but that sloppiness is not going to be the same. So you basically get the square or better of the specificity.
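[Editor’s note: a back-of-the-envelope way to see the “square of the specificity” claim, under the assumption that the two agents’ cross-reactivities are independent: if the capture agent registers a spurious protein with probability p1, and the sandwich agent, which binds a different epitope, does so with probability p2, then a false signal requires both events at once,

p_false ≈ p1 × p2.

Two agents that are each 99 percent specific (p1 = p2 = 10^-2) would together give a false-signal rate near 10^-4, which is the squaring Kodadek refers to.]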

The bad news is that in terms of actually making devices that you can sell, no one’s thrilled about immobilizing antibodies on surfaces and then storing or shipping them. So I think everyone agrees that the solution is to use synthetic chemicals as capture agents, which don’t have to maintain a particular folded structure. The problem there is that no one knows how to make very high affinity chemicals that bind proteins. That’s the seminal problem that we’re trying to crack. V.2 will probably be chemical capture agents and antibody sandwich agents. Then v.3 will be chemicals entirely.

How are these efforts going?

I think we’ve fundamentally solved the problem. It’s a two-step process. Step one is, we start with a combinatorial library of chemicals and screen it for binding to the target protein. We get only modest-affinity hits — we’re about three to four orders of magnitude away from what we need. So you’re left with the problem of how to gain three or four orders of magnitude in binding affinity.

If you’re a drug company, you hire 3 million chemists to make every derivative known to mankind, and you eventually find something. We can’t do that. So we’ve developed two techniques. The idea here is, rather than trying to improve a compound in a stepwise fashion, we just live with the fact that it only has modest affinity. But then we try to find a second binding compound that also has modest affinity. We combine the two appropriately to get a high-affinity bidentate ligand — one that contacts the protein in two places. So basically, if you get two modest-affinity contacts that bind at the same time, the combination is a very high-affinity binder.
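[Editor’s note: the arithmetic behind linking two weak binders, as a rough sketch rather than figures from the interview: the binding free energies of the two contacts approximately add, so their dissociation constants multiply, scaled by an effective concentration C_eff that reflects how well the linker presents the second contact once the first is bound:

K_d(bidentate) ≈ (K_d,1 × K_d,2) / C_eff.

For example, two 10 µM hits (10^-5 M each) with a modest C_eff of 1 mM would combine to roughly 100 nM, and a more favorable geometry with C_eff near 0.1 M would reach about 1 nM, the three-to-four-orders-of-magnitude gain described above.]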

Doing that is not an original idea. But we’ve developed two ways to find these kinds of ligands very rapidly. I can tell you about one, which is published in the Journal of the American Chemical Society. We find two modest-affinity binding agents and then co-immobilize them on the same feature of a capture array or bead. Traditionally, people have screwed around trying to find an optimal linker. We thought that if we immobilized the two agents at high density on a surface, we wouldn’t have to worry about that — the surface itself would act as a library of linkers, presenting the two in all possible geometries. The other one I can’t tell you about because we’re getting the IP in line. But it’s even better.

Are you collaborating with any companies on this?

We’re doing this within the Center for Biomedical Inventions, a research center that Stephen Johnston and I started about four years ago. It’s dedicated to developing technology. We fund it through grants, especially from DARPA and NIH, and Southwestern gave us a fair amount to get started. We now have pretty serious resources to develop these things to a very late stage. We’ve already spun out two companies [from CBI], and our main goal [with proteomics technology] is to start new companies. I’m very hopeful that all this proteomics technology that we have will come out sometime next year, although I’m still not sure what form it will take.

Do you plan to then spin out another company?

Yes, unless we receive an incredibly attractive offer from some existing company that’s already interested in this. I’ve had four or five companies approach me.

How do you envision the future of proteomics?

I’d like proteomics to really have an impact on Joe Blow in Indianapolis; I’d like to do something that has a real-life impact. My colleagues and I have this vision that if we could develop these chips in a form that’s really cheap, we could put one of these units into at least every doctor’s office and maybe even your house, next to your toaster oven. The idea would be to sample your proteome on a daily basis, use that as a diagnostic tool, and be able to catch, in a pre-clinical fashion, when people are getting sick in various ways. I think we can do it — the trick is going to be how many things you have to measure, and how sophisticated the bioinformatics are going to have to be.