PNNL Proteomic Researcher Uses Tobacco Fund to Build MS with Higher T-Put, Sensitivity

Who: Richard Smith
 
Position: Director, NIH Research Resource for Integrative Proteomics, 2003 to present; Battelle Fellow, Pacific Northwest National Laboratory, 2001 to present; chief scientist, Environmental Molecular Sciences Laboratory, PNNL, 1997 to present
 
Background: Senior staff scientist, PNNL, 1988 to 2001; PhD in physical chemistry, University of Utah, 1975
 
Last month, Washington state announced that Richard Smith of the Pacific Northwest National Laboratory had been awarded $4.8 million to help develop a mass spectrometry-based technology for biomarker discovery.

The money came from the state’s portion of the tobacco Master Settlement Agreement, reached between several states and tobacco firms.

 
Smith and about 20 others in his lab have spent five years developing the platform. With the funds, they plan to test it for liver cancer biomarker discovery.
 
ProteoMonitor spoke with Smith this week about his work and the technology. Below is an edited version of the conversation.
 

 
Describe the platform you’re developing.
 
The platform that we’re developing is based on separations that use liquid chromatography and ion mobility, with analysis by mass spectrometry. So in that sense, it’s a lot like existing LC-MS platforms: separation combined with mass spectrometry.
 
What’s different about it is the speed, because we use ion mobility separations to accomplish the separations faster, and the sensitivity. There are a number of advances related to the ionization and performance at the front end that allow us to put several orders of magnitude more ions through the instrument. That is tied in with the need for speed, which creates the need for more ions, more signal, in the analysis.
 
That combination of sensitivity improvement and throughput improvement is the basis for the technology platform.
 
The speed comes from the fact that you can put more ions through the instrument?
 
And we have faster separations. Faster separations are an absolute requirement for being able to deal with the complex mixtures we have in proteomics.
 
Is there a way to quantify what you mean when you say it is faster and more sensitive?
 
In the initial version of the platform, we’re aiming at approximately an order of magnitude increase in throughput, [so you can] go from 10 proteome analyses per day to 100 on one instrument.
 
And what about the sensitivity? 
 
In terms of sensitivity, the goals are very similar and look to be quite reasonable, increasing the depth of proteome coverage, the sensitivity of the analysis by an order of magnitude.
 
What stage are you at in developing this platform?
 
We’ve been working with a couple of prototypes. From that, we’ve learned quite a bit and we’re in the process of building the platform that we’ll be applying in this program.
 
How long have you spent developing this?
 
About five years at this point.
 
Who has been funding it?
 
We’ve had some funding from the National Cancer Institute. We’ve had some internal funding through the Department of Energy for different parts of this. Right now, we’ve got some internal support for developing parts of the platform and the associated informatics.
 
There’s a significant need for enhancements to our informatics pipeline to deal with the different nature of the data that comes from the ion mobility separation, and also with the much higher throughput, because we generate huge amounts of data really quickly. And changes have to be made in order to handle that.
 
So you’ve had to develop the bioinformatics capability because the existing commercial products can’t handle the amount of data?
 
We’re not dealing with the bioinformatics downstream analysis. What we’re focused on is what’s unique to our platform, and that’s just the initial stages of acquiring the data and handling it and getting it to the point where the conventional tools can be effectively applied.
 
We’re trying not to reinvent anything that already exists.
 
How did you arrive at deciding to develop this platform? Was it because you had to overcome bottlenecks in your own research?
 
We’ve been doing proteomics for about 10 years, and we’re a large proteomics lab with several dozen instruments and platforms doing as high a throughput as we can. A number of our applications, and many, many more, are clearly limited by what we can do in terms of throughput.
 
We’re involved in some fairly large applications right now in proteomics. Across the board, whether it’s in systems biology applications of proteomics or biomarker discovery applications of proteomics, we find that there’s a need and a desire to obtain proteome analyses from a significantly larger number of individuals.
 
I’ve heard that some people have said in regard to biomarker discovery in proteomics, ‘We don’t need new biomarkers. We have far more biomarkers already than we can work with.’
 
That is really a result of the way we approach it and the limitations that we have at present. If you take any cancer versus normal or whatever, and you analyze just two samples, you’ll see hundreds or thousands of differences between the two.
 
And the problem is, ‘What [differences are] really due to the cancer, and which are really effective biomarkers?’ Until you look at a significantly larger number of individuals, and you account for biological variation, for inflammation that you’re seeing a response to, for different types of cancers and different disease states, you really can’t arrive at the real quality biomarker candidates that you would like to take to the next step.
 
We deal very often in our lab at present with this frustration of not being able to deal with the larger populations at the discovery stage. And I think that is generally the case in biomarker discovery at present.
 
So we see real need for much larger throughput, and that’s what we aim to accomplish, and that’s what we intend to do in this initial application of this program.
 
Can you give me a specific example of when this throughput limitation hindered your research? 
 
In this case, in the development of biomarkers for hepatitis C [HCV], the biostatisticians would like us to be looking at liver microbiopsy samples from at least 500 individuals. In terms of the conventional way of doing things, a MudPIT analysis might take a day of instrument time, and if you talk about 500 samples and any technical replicates, you’re talking about years of experiment time.
 
It quickly becomes either too expensive or impractical or very slow.
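 
To put rough numbers on that bottleneck, here is a back-of-the-envelope sketch in Python. The figures (one day of instrument time per MudPIT run, 500 individuals, the roughly 100 analyses per day targeted for the new platform) follow the interview; the assumption of two technical replicates per sample is purely illustrative.

```python
# Back-of-the-envelope illustration of the throughput problem described above.
# Replicate count is an assumption; the other figures come from the interview.

individuals = 500          # liver microbiopsy samples requested by the biostatisticians
technical_replicates = 2   # assumed number of technical replicates per sample
days_per_mudpit_run = 1.0  # roughly a day of instrument time per MudPIT analysis

total_runs = individuals * technical_replicates
conventional_days = total_runs * days_per_mudpit_run
print(f"Conventional MudPIT: {total_runs} runs, about {conventional_days / 365:.1f} years on one instrument")

# At the ~100 proteome analyses per day targeted for the new platform:
new_platform_days = total_runs / 100
print(f"New platform target: about {new_platform_days:.0f} days for the same sample set")
```

On those assumptions the same study shrinks from a few years of instrument time to roughly a week and a half, which is the scale of reduction the two-orders-of-magnitude goal implies.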
 
And if your platform works the way you hope it does, how much of a reduction in time are we talking about?
 
The hope is that we can go two orders of magnitude faster than we do at present. We also take a very different approach than the MudPIT approach. We use an accurate mass and time tag approach, what we call the AMT tag approach, developed at our laboratory, that doesn’t do shotgun proteomics. That’s allowed us to increase the throughput significantly in our lab, and this new platform still allows us to apply this approach and go faster.
 
So we expect … from a single platform to get to a point in the near future where we could do in the range of 100 proteome analyses per day. That gets us to a level where we can take on some significant applications.
 
Another one I’ll mention is we’re doing work with Desmond Smith at [the University of California, Los Angeles] looking at the mouse brain, looking at biomarker discovery related to Alzheimer’s and Parkinson’s and so on.
 
And in that work, Desmond’s lab voxelates the mouse brain; they basically cut up a mouse brain into 700 little cubes, each a millimeter on a side. And we do proteome analyses on each of the cubes, and we get information on more than 1,000 proteins, and we can reconstruct low-resolution, three-dimensional images for each of the proteins, and look at their abundance and their variation across the brain.
 
That’s really informative as to how proteins vary in different regions of the brain. We’ve published some initial work, and this 3D experiment has actually been done already with our existing technology. It’s just much slower than we would like … we’ve done that now for a normal brain, and now we want to compare quantitatively a Parkinson’s model, an Alzheimer’s model, and other disease states to see the variation in protein abundances and where they occur in the mouse brain.
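 
As an illustration of the kind of reconstruction described above, the hypothetical Python sketch below assembles per-voxel abundance measurements for one protein into a low-resolution 3D array. The grid dimensions, record layout, and protein identifier are assumptions made for illustration and are not taken from the published work.

```python
# Hypothetical sketch: build a low-resolution 3D abundance map for one protein
# from voxel-level proteome measurements. All names and dimensions are assumed.

import numpy as np

# Suppose each record is (x, y, z, protein_id, abundance) for one 1-mm^3 voxel;
# a 10 x 10 x 7 grid gives ~700 cubes, as in the mouse-brain description above.
grid_shape = (10, 10, 7)

def build_abundance_map(records, protein_id, shape=grid_shape):
    """Place one protein's abundances into a 3D array indexed by voxel position."""
    volume = np.full(shape, np.nan)        # NaN marks voxels with no measurement
    for x, y, z, pid, abundance in records:
        if pid == protein_id:
            volume[x, y, z] = abundance
    return volume

# Example with a single made-up measurement:
records = [(4, 5, 3, "P12345", 7.2)]
vol = build_abundance_map(records, "P12345")
print(np.nanmax(vol))   # peak abundance of this protein across the grid
```

Repeating this for each of the 1,000-plus quantified proteins, and for brains from different disease models, would give the comparative 3D abundance maps the interview describes.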
 
In increasing the speed and sensitivity, have you affected other metrics, such as mass-to-charge ratios?
 
We’ve done this in a way that preserved all the benefits of a mass spectrometry approach. We use a time-of-flight spectrometer that provides us very accurate mass measurements, and we have increased the dynamic range of the measurements by improving the sensitivity, the production of ions from the sample, and their transmission through the mass spectrometer.
 
It’s a lot of work. We built on a lot of technology that’s been developed in our lab. We’re at a national lab, and we have some great resources as far as the technology development, but this has been a significant effort. And this program that the state of Washington is funding is really mostly going to help tie some of the pieces together, and then apply it in a way to real health-related problems, and to really try to address this need to improve the biomarker discovery effort to a degree that allows us to come up with … a quality set of biomarker candidates that we can push to the next stage where they can actually be transitioned to some clinical application much faster, and hopefully with the quality that we expect. 
 
Are you going to be looking at other diseases aside from liver cancer?
 
Absolutely, that’s part of the program. The first one, HCV, is just essentially a proof of principle. We’d like it to be applied broadly. We also think the platform itself has potential clinical applications. Although conventionally, people think of much simpler approaches based on immunoassays and so on, we believe that development of this platform … may have some clinical role.
 
Have any mass spec vendors expressed any interest in this?
 
Not in the entire platform. I think that it’s a little too early for that. Frankly, this is an area where the IP issues are very hazy. There are a number of patents out there, and I think a lot of the vendors have been a little shy to step into it simply because of the complexity. And until there’s an obvious and significant payoff, I think there’s just some natural disinclination to step into it.
 
Who owns the IP, you or the Pacific Northwest National Laboratory? 
 
We own some pieces of the essential IP, but the ion mobility separations have been around for quite some time. Dave Clemmer at Indiana University has a patent that I think has some validity in this area. There are a number of pieces that cloud the picture a little bit. That makes it interesting and maybe has led to the lack of action by the vendors. I think they’re all watching with interest.
