At A Glance
Name: Fred Regnier
Position: Scientific director for analytical systems, proteomics, and metabolomics at the Bindley Bioscience Center, Purdue University, since 2003. Faculty member in analytical chemistry since 1969, first as assistant professor and now as professor.
Background: Postdoc, Harvard, 1969.
Postdoc, University of Chicago, 1967.
PhD, Oklahoma State University, 1965.
What is your background in terms of nanotechnology and protein research?
I started out as an analytical chemist, and most of my career has involved the analytical chemistry of biochemicals, primarily proteins. We started making separation media for proteins 30 years ago. In 1988, we founded a company called PerSeptive Biosystems, now part of Applied Biosystems, to make that separation media, and I spent about a decade in that era. Beginning in 1990, we started on the whole lab-on-a-chip effort. We had patents at the time on how to do chemistry, and analytical chemistry, on sub-nanoliter quantities of liquid. That lab-on-a-chip work carried over into 1997 and 1998, when we began to do proteomics and to build micro-separation systems: whole chromatographic systems, like an HPLC, with a total volume of maybe 10 or 15 nanoliters. We microfabricated the entire chromatography column, including the particles, everything, on a chip.
What happened at PerSeptive Biosystems?
What happened was we had discovered how to make the particles smaller and the separations faster. During the decade from '74 to '84 there were major changes on the analytical side of protein separations, and by the mid-'80s we had learned how to do it on a process scale. When we started PerSeptive Biosystems in 1988, it was intended to be purely a process-separation company doing large-scale purification of recombinant proteins. We eventually got involved in all kinds of things beyond process scale; at one time our company made more MALDI instruments than everyone else put together. Those MALDI instruments stayed with the company when ABI bought PerSeptive.
Can you describe some of the things you developed working on proteomics?
Our laboratory specialized in comparative proteomics based on quantitation with stable-isotope coding, and we had been doing that since 1998. The ICAT method and what we call GIST, for Global Internal Standard Technology, started independently in Ruedi Aebersold's laboratory and mine; our publications appeared within several months of each other, and they simply did different things. ICAT targeted cysteine, whereas ours targeted all peptides; for that reason we called it a global method. Now lots of people do global labeling. That was one thing we did.
Another thing we were very much involved in from the early days was affinity-selection methods to simplify the proteome. In other words, instead of always doing global analysis of the proteome, we focused on targeted analysis, targeting particular chemical features such as glycosylation. We have spent many years looking at glycosylation, and phosphorylation too. Another is oxidative stress, the oxidation of proteins that occurs in aging, for example. We have projects on aging now; oxidative stress is involved in a lot of conditions, and aging is one of them. Proteins damaged by oxidation carry an aldehyde or ketone group, so we derivatize that aldehyde or ketone with biotin and then pull out only the oxidized proteins with avidin.
Can you describe how you developed the methods for targeted analysis of proteins?
Because I had worked in protein chromatography since the '70s, I was familiar with all kinds of affinity chromatography and a large number of different modes of separating proteins. What occurred to us when proteomics started was that all of these methods would be of great value there. Actually, we didn't do anything new: we simply took the techniques that had been developed in the '80s and reapplied them to proteomics. It is our contention that not much new separation science has been done in proteomics. It came naturally to us after 20 to 30 years in protein separation; it was obvious how you would do it. The only difference from the past is that before, the methods had been used to purify a specific compound with certain structural features, whereas in proteomics we used them to separate a broad class of molecules sharing a certain type of structural feature. That was it; it was not anything very different.
At that point were you also working on nanotechnology?
Yes, we were. Our whole idea was how to combine proteomics and all of these things together. The interesting thing about proteomics, in our mind, was that although it is wonderful for discovery and for understanding disease, we thought there would ultimately be a clinical side to proteomics in which you would want to look at large numbers of proteins in a clinical environment, and that the most logical way to do that would be with some sort of disposable microdevice. In other words, you would go in for a blood test at your doctor's office, they would measure hundreds of different proteins on a drop of blood as an indicator of your health, and when that test was finished in 15 minutes, you could throw the chip away.
What kinds of microdevices are you working on now?
One nanotechnology project we're excited about essentially takes the equivalent of a Sony Walkman and turns it into a proteomics machine. When a CD player reads a music disc, the disc has little pits on it; as the pits go by, the optics detect each one, and you get a digital signal out of that. What we do is take the pit and make a post out of it, and spin the disc at 6,000 rpm. As these little posts go by, you can detect in a microsecond how high each post is. We then put an antibody on top of each post, a different antibody per post, and as antigen binds to the antibody, it changes the height of the post. So now you can spin a disc like this and read off large numbers of posts. Theoretically, it would be possible to run 10,000 different immunological assays on 10,000 different proteins. It's just a different way to do so-called protein chips, and it is being done by David Nolte in [the physics department] and me. We are forming a company around that technology.
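[Editor's note: the readout scheme described above can be sketched as a toy simulation. This is purely illustrative; all numbers (post heights, binding-induced height change, noise levels, threshold) are hypothetical assumptions, not specifications of the actual device.]

```python
# Toy sketch of the spinning-disc immunoassay readout: each post carries an
# antibody, antigen binding raises the post's apparent height, and the reader
# digitizes each post's height change as it spins past, like pits on a CD.
# All constants below are hypothetical, chosen only to illustrate the idea.
import random

BINDING_DELTA_NM = 8.0   # assumed height increase when antigen binds (nm)
NOISE_NM = 0.5           # assumed measurement noise (nm, 1 sigma)
THRESHOLD_NM = 4.0       # call a post "bound" above this height change

def read_disc(bound_flags, rng):
    """Return the measured height change for each post as it passes the reader."""
    readings = []
    for bound in bound_flags:
        delta = BINDING_DELTA_NM if bound else 0.0
        readings.append(delta + rng.gauss(0.0, NOISE_NM))
    return readings

def call_binding(readings):
    """Digitize the analog height changes into binary binding calls."""
    return [r > THRESHOLD_NM for r in readings]

rng = random.Random(42)
truth = [True, False, True, False, False]  # which antibodies captured antigen
calls = call_binding(read_disc(truth, rng))
print(calls)
```

With the assumed 8 nm binding signal against 0.5 nm noise, the thresholded calls recover the true binding pattern; in practice the engineering challenge is precisely this signal-to-noise margin at 6,000 rpm.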
When did you start developing the disc?
A couple of years ago. We're primarily perfecting the detection technology, the so-called reader. The way we read the disc is the unique part; you use a specific antibody for whatever analyte you're trying to measure.
Are you working on any other devices in terms of nanotechnology?
We've worked on small devices that go into all kinds of analytical instruments, including mass spectrometers, although they're really more micro- than nanotechnology; all of our work there is micro-scale. We're looking at making microdevices that prefractionate protein mixtures before you put them into a mass spectrometer, because you want to look at small samples and you need a device that matches the sample size. We do a lot of that on chips. But that is a pure research application of microdevices, as opposed to what we think are the more far-reaching microdevices that will be used in clinical chemistry.
What are you working on for the future?
I have grants to study oxidative stress and aging. And one of the other things we're very interested in is glycoproteomics in cancer.