
David Nolte on CD Proteomics and Measuring Proteins With a Discman


At A Glance

Name: David Nolte

Position: Professor of physics, Purdue University, since 1989.

Background: Post-doc in optical materials, AT&T Bell Labs, 1988-89.

PhD in solid-state physics, University of California, Berkeley, 1988.

BA in physics, Cornell University, 1981.


How did you get involved with proteomics?

Back in 1999, a biochemistry professor who was in on one of the early stages of proteomics, Fred Regnier, came over to the physics department for a luncheon, and the purpose was essentially to challenge a group of us. He said, ‘you guys are physicists, you’re supposed to be really good at measuring lots and lots of very small things,’ and then he explained proteomics to us. He basically threw down the gauntlet, and said ‘prove it.’

I was working on a project at the time involving what’s called vertical cavity surface-emitting lasers. I’m actually a semiconductor physicist by training, and a vertical cavity laser is a semiconducting laser. They have an advantage that you can get really high densities on a single chip. So I was initially thinking in terms of lasers to try to do something, and that, I thought, ultimately was kind of a bad idea. But then I remembered that another thing that has really high density, and that is used for detecting lots and lots of little things, is the pits on a CD. They’re all really small, and a single data CD that you use on your computer will have something on the order of 5 billion pits. [Then I thought] of the pits as test tubes, and the test tubes are ways of doing chemistry and looking for proteins, and then I ran back to Fred Regnier, and we talked a little bit more about it, and that sort of launched this whole project.

So at what point in the project are you now?

There’s a vision of how this could be used and what it might be used for, and then there’s where we are now. And there’s a pretty big distance between the two. Where we are now is this: We’re taking the approach of using antibodies as the recognition proteins at the moment, and we have hit quantitative numbers on three properties: We have a detection sensitivity of 10 ng per milliliter — and that’s a clinically significant concentration, because that’s getting toward the levels of PSA in blood. Our selectivity is something greater than 10,000. And then the third number is dynamic range: We’re working at a dynamic range of 50-to-1 currently.

How do you get around the problems with density on chips, such as cross-reactivity?

That’s the selectivity. Because if you put down antibody A in track A and antibody B in track B, and then you expose it to a multi-analyte sample, you don’t want antigen B to be binding or partially binding to track A. The immobilization that we use for the antibodies is a thiol chemistry on gold. That form of immobilization seems to give us really good preservation of the antibody function, and so that’s, I think, what’s giving us the 10,000 selectivity. So that means that our cross-talk is less than 10⁻⁴ between channels.

We haven’t really done multi-analyte [experiments yet] — the most done to date is [using] two analytes. But with the non-specific binding experiments that we’ve done, we anticipate we could look for hundreds, if not 1,000, different proteins of comparable concentration without cross-talk. Where that does start to fall down a little bit is in the sense that in the blood, you have this really large range of concentrations. And so there are certain limits of how trace we can get. But those are all good, strong numbers. And we’re very fast.

I do want to point out that there are antibody arrays out there [that are] kind of looking for the same kind of applications we are, but we’re using what’s called direct detection. Most of the other technologies out there use fluorescence, and fluorescence is very inefficient, very indirect. Very roughly, in fluorescence, for every 1,000 photons in to excite, you’re maybe getting one photon out that you actually detect. In our approach, every photon interacts with the material, and comes out and is detected. So we’re essentially running at close to 100 percent efficiency. That makes us roughly 1,000 times faster than fluorescence detection.

So how does the detection work — do you literally run the disc through a CD player?

No. Quite a few people have thought that we actually use CDs. And we don’t, and for a really good reason: When you play a music CD, you can put lots and lots of fingerprints on it by picking it up wrong, and it will still play music and you won’t notice any difference in the music that you hear. [That’s because] CDs are designed to be insensitive to biological layers. We fabricate CDs to be true analog devices, rather than digital. Digital would be very insensitive to biological molecules, but we fabricate the CDs in a way that makes them extremely sensitive to very small amounts of biological molecules. Analog just means that you get a continuous readout, which means that we can read concentration. So if you double your concentration of protein, we get twice as much signal out. It’s actually a sensitive measurement of the concentration of antigen [present].

In a digital CD, there’s a thing called a pit, and that’s illuminated by a laser beam. The pit is recessed in a very high reflectance layer. It’s actually [more of] a ridge. The height of the ridge in a digital CD is a quarter of the wavelength of light. What that does is give you destructive interference in the reflection. So when you’re on top of this ridge, you get roughly zero reflectance back and call that zero. When you’re off the high reflectance part — that’s called the land — you get 100 percent back, and call that a one. That’s how you encode ones and zeros on a disc.

So that’s digital, and it’s very insensitive — molecules or fingerprints almost don’t change anything. You still get almost 100 percent on the land and almost 0 percent on the pit. What we did sounds trivial, but it makes all the difference: Rather than making the ridge one-quarter wavelength in height, if we make it one-eighth wavelength in height, then it becomes exquisitely sensitive to very low numbers of molecules that you put on the ridge. So all we’re doing is taking the basic CD structure and reducing the height of these ridges by just a factor of two. That turns it from a digital device that’s insensitive, into a device that’s exquisitely sensitive to proteins that are immobilized selectively on the ridges.
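The quarter-wave versus eighth-wave argument can be sketched numerically with an idealized equal-amplitude two-beam interference model. This is an illustration of the interference math only, not the group’s actual device model; the laser wavelength and the 2 nm “protein layer” are assumed values for the example:

```python
import math

def reflectance(ridge_height_nm, wavelength_nm=650.0, layer_nm=0.0):
    """Idealized two-beam interference: light reflected from the top of
    the ridge and from the surrounding land acquires a round-trip phase
    difference of 2 * (2*pi/lambda) * height. A thin protein layer on
    the ridge simply adds to the effective ridge height."""
    phase = 2.0 * (2.0 * math.pi / wavelength_nm) * (ridge_height_nm + layer_nm)
    return math.cos(phase / 2.0) ** 2  # normalized reflected intensity

wl = 650.0  # assumed red-laser wavelength in nm

# Quarter-wave ridge: destructive interference, ~0 reflectance (a digital pit)
print(round(reflectance(wl / 4, wl), 4))   # 0.0
# Eighth-wave ridge: 50 percent reflectance, the steepest point of the fringe
print(round(reflectance(wl / 8, wl), 4))   # 0.5

# A hypothetical 2 nm protein layer barely moves the quarter-wave signal...
d_quarter = abs(reflectance(wl / 4, wl, 2.0) - reflectance(wl / 4, wl))
# ...but shifts the eighth-wave signal by a far larger amount,
d_eighth = abs(reflectance(wl / 8, wl, 2.0) - reflectance(wl / 8, wl))
# and near the 50 percent operating point the response is roughly linear:
# doubling the layer thickness approximately doubles the signal change.
d_double = abs(reflectance(wl / 8, wl, 4.0) - reflectance(wl / 8, wl))
print(d_quarter, d_eighth, d_double / d_eighth)
```

The eighth-wave ridge sits mid-fringe, where the interference curve is steepest, so in this toy model a thin added layer produces a change tens of times larger than at the quarter-wave condition, and the change grows roughly in proportion to layer thickness — which is the sense in which halving the ridge height trades digital robustness for analog sensitivity.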

And the readout?

The readout is just linear — so rather than getting ones and zeros, now when you’re straddling the ridge, you’ll get back maybe 55 percent reflectance, or 56 or 57. That [reflectance] is directly proportional to the concentration of proteins that are sitting on the ridge. You print the antibodies on tracks, and the way we work right now is, a whole track is a single chemistry, and then different tracks have different antibodies. So we put the antibodies down — let’s say several different antibodies on several different tracks. And then we expose the entire disc to a single sample. And it’s spinning. So you essentially put a few drops in the center of the disc, and it just centrifuges over the surface of the disc.

So what’s the next step now that you’ve gotten it to this point?

I think the major challenge now is to go true multi-analyte. Let me give you another number. On a disc the size of an ordinary CD, there is room for between 1,000 and 10,000 different tracks. That’s because the laser light is focused to such a small spot size. So we really could have 10,000 tracks on a single disc. And roughly speaking, you’ve got 10,000 proteins in your blood. So we could, in principle, have a single CD assaying for all the proteins in your blood. That’s the potential. That’s the vision. That’s years out. But to do that, we really have to get to the stage where we’re putting down tens to hundreds of tracks on a single CD looking for tens to hundreds of different proteins. That’s when I think we start getting to these very large concentration differences in actual sera. That’s going to be one of the major challenges there.

We may need to push our selectivity. We may need selectivity closer to 100,000 to really be able to start to get good ability to do the true multi-analyte. And if that goes well, then, I think, looking for actual disease markers in actual human sera would be the stage after that.

So an application would be biomarker discovery?

I think so. There are lots of different permutations of this. I was thinking mostly in terms of a diagnostic assay with a really high number of analytes. But other people have suggested other things. [For example], it’s possible that this could be a more general biosensor.

We can also look for things that aren’t even proteins — for example, if you do a competitive immunoassay approach, we can actually go after small haptens, and that can get into things like chemical warfare agents. We might be able to detect possibly drug or metabolic products in sera or things like that. We’d have to do it differently than what we’re doing right now, but I think it’s doable.

We also have a proposal into NIH right now to use [the CD] as a way of measuring dissociation coefficients and association coefficients in protein-protein interactions. We would actually be printing protein A into a track and then seeing how strongly it binds protein B, protein C, [etc.]. So that really gets into the protein network aspect as well.

Where do you get most of your funding now?

It’s all from NSF right now. We have two proposals into NIH, but they’re both pending.

Realistically, how many years down the line would you expect this on the market?

It’s all dependent on the funding. Funding is really tight right now. I’ve been a scientist for 15 years, and this is the lowest point I’ve seen in terms of funding availability. But if we could get significant funding for this, I would say that even then it would be three years minimum before we’d actually be looking at disease markers.

The other thing is, [there is a] whole technology track that would be maybe something that a small company or venture capital [firm] would need to get involved in — actually packaging this whole thing into something roughly the size of a Sony Discman.

Have you talked with any companies about collaborating?

A few. But this is really radical technology for most biotechnology companies. The few people who have come by the lab look at what we do and it’s just so alien to them that they just can’t see a match.

But that raises another point of why we think this is an interesting technology. Sony and Philips develop[ed] the CD and creat[ed] this huge consumer market. CD players are manufactured for this mass market, and the cost per player to the original manufacturer is almost zero. The whole value in the market is the CD, not the player. So as a market structure, the idea is that you could give doctors these players for free — because they’d essentially just be modified Discmans — and then all the value would be in the discs. So it’s following a known market model. It’s very cheap compared to most of the other technologies — the antibody chips out there are very expensive. And there is one other competing technology, which is surface plasmon resonance. That’s also direct detection. But SPR systems [are expensive].

Now that you’ve been brought into biochemistry, are you going to do other things in this area, or go back to semiconductors?

I think semiconductors are a thing of the past. Right now, everything I do is one form of biotechnology or biomedical technology. I’m no longer a semiconductor physicist.
