Cornell Researchers Find Biomarker Panel in CSF Associated with Alzheimer's Disease


Name: Kelvin Lee
 
Position: associate professor of chemical engineering, Cornell University, 1997 to present.
 
Background: PhD, California Institute of Technology, 1995; postdoc, California Institute of Technology, 1995-1997; director, Cornell proteomics program, 2000-2005; director, Cornell Institute for Biotechnology and Life Sciences Technologies, 2005 to present; director, New York State Center for Life Science Enterprise, 2005 to present.
 

 
Kelvin Lee and colleagues at Cornell University and Weill Cornell Medical College recently identified a panel of 23 protein biomarkers in cerebrospinal fluid associated with Alzheimer’s disease by using proteomics technology, detailed image analysis, and computational and statistical analysis. They did this by comparing 2,000 CSF proteins from 34 patients with Alzheimer’s disease with those from 34 controls.
 
Those results were validated in a follow-up study with a group of 10 patients with suspected Alzheimer’s disease and 18 healthy and demented control subjects.
 
Some of the biomarkers included proteins associated with the binding and transport of beta-amyloid peptide in the senile plaques that clog the brains of Alzheimer’s patients. Other proteins and molecules were linked to inflammation and synaptic dysfunction.
 
Lee led the proteomic research team that identified the biomarkers. Their findings appear in the Dec. 12 online edition of Annals of Neurology.
 
ProteoMonitor spoke with Lee last week about the research.
 
 
How is your work different from all the other biomarker work being done on Alzheimer’s disease?
 
Right now a definite diagnosis of Alzheimer’s disease has to wait for postmortem examination, and so the best that a neurologist can do on a pre-mortem exam is to assign a diagnosis of probable Alzheimer’s disease. And it turns out that clinicians are incorrect nominally about 20 percent of the time, maybe 25 percent of the time. So one can argue there’s a need for better methods to assess, while people are still alive, whether they have the disease or not. And there are a number of different kinds of dementia that can be mistaken for Alzheimer’s disease, and some of those are even treatable. So it’s important to correctly classify people as well as possible. What our test seems to do is improve the accuracy of that diagnosis to greater than 90 percent.
 
So there are a few things that I think we did a little bit differently that made this an interesting project for us. One big challenge in the Alzheimer’s community is — if you look at the biomarker papers — who the groups of patients are that people are studying. There’s a lot of work out there that looks at normal versus Alzheimer’s. And that’s an important first comparison, but one can argue that a physician doesn’t have that difficult a time distinguishing somebody with moderate Alzheimer’s from somebody who is normal. It’s important to have normal controls, but it’s more important to have neurologic controls or demented controls. Our study group does include a few normal controls, but it also includes a reasonable number of neurologic controls.
 
The more important and the harder question is on the Alzheimer’s group. A lot of the papers out there, in fact, I think all of the papers out there with one or two exceptions, when they talk about looking for Alzheimer’s samples from cerebrospinal fluid, they generally take one of two forms. One form is they can take cerebrospinal fluid from people after they’ve passed away. And the challenge there, we’ve found, is that there are so many changes in the biochemistry of the blood-brain barrier immediately upon death that the spinal fluid protein composition after death is very different from what it is before death. Ideally, you want a pre-mortem test.
 
The other category of papers is the one that takes Alzheimer’s patients where they collect the spinal fluid pre-mortem but they don’t actually have a confirmation of the diagnosis because the confirmation of the diagnosis can only occur upon death, and sometimes death comes five, 10, 15 years later. In many cases, patients may not have an autopsy, or they may have moved to another city or they changed physicians, and so that connection between the pre-mortem sample and the postmortem confirmation is lost.
 
What we were able to do is collect a number of pre-mortem Alzheimer’s disease spinal fluid samples but look at cases only where we had postmortem confirmation of the disease and compare that group to another group that comprised normal as well as neurologic controls.
 
I think we were very fortunate to find a very good collection of samples, and that’s something that sets us apart, I think, from other studies.
 
The second thing is we used a multivariate statistical technique that not too many people in the proteomics community have used yet, and it’s called the random forest method. As a lot of people will know, in proteomics experiments, you make a lot of measurements, you measure a lot of different proteins, and when you do these kinds of biomarker studies, the population that you’re studying is relatively small compared to the number of measurements. It would not be unusual to have 100 subjects but 2,000 proteins that you’re measuring, for example.
 
Those kinds of studies are called underspecified. What that means is when you try to apply multivariate statistical methods to figure out what are the statistically significant changes, a lot of those methods are not designed for underspecified systems. So if you apply them, you can basically find anything you want to find.
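To make the underspecification problem concrete, here is a minimal sketch, entirely synthetic and not from the study: with 2,000 measured “proteins” and only 100 subjects, an ordinary classifier can perfectly separate even labels assigned at random. The scikit-learn setup and all numbers below are illustrative assumptions.

```python
# Illustrative sketch only: pure-noise data with far more measured
# features than subjects, mirroring the 100-subject / 2,000-protein
# example in the text. None of this is study data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2000))   # 100 subjects x 2,000 "protein" measurements
y = rng.integers(0, 2, size=100)   # disease/control labels assigned at random

# A weakly regularized linear classifier fit directly to the data ...
model = LogisticRegression(C=1e6, max_iter=5000).fit(X, y)

# ... "finds" a perfect separation in noise: training accuracy near 1.0.
print("training accuracy on pure noise:", model.score(X, y))
```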
 
What this random forest approach was designed to do was to deal with these underspecified systems. In the biological context, it was first applied to microarray data, which is another kind of data set that is highly underspecified for these kinds of studies. And people have found that it did a great job of giving you not only the statistically significant changes, but it also gives you a measure of the error in your measurement, which, because of the nature of the technique … is kind of independent of everything else.
 
And because it’s designed for these underspecified systems, it’s also designed in a way where technical noise, as can happen in any experimental system, doesn’t have as much of an effect on the outcome as with other multivariate methods.
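A minimal sketch of the random forest idea on the same kind of synthetic, underspecified data; the 23 planted “marker” columns, the scikit-learn setup, and all numbers are illustrative assumptions, not the study’s data or code:

```python
# Sketch with synthetic data: a random forest on an underspecified
# dataset, using the out-of-bag (OOB) score as the roughly independent
# error estimate described in the interview.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2000))   # 100 subjects x 2,000 proteins (synthetic)
y = rng.integers(0, 2, size=100)
X[y == 1, :23] += 1.5              # plant a modest shift in 23 "marker" proteins

forest = RandomForestClassifier(
    n_estimators=500, oob_score=True, random_state=0
).fit(X, y)

print("OOB accuracy:", forest.oob_score_)      # error estimate with no separate test set
top = np.argsort(forest.feature_importances_)[::-1][:10]
print("top-ranked proteins:", top)             # should mostly fall among the planted 23
```

Because each tree is scored only on subjects left out of its bootstrap sample, the out-of-bag estimate comes essentially for free and does not reuse the data a tree was trained on, which is the sense in which the error measure is independent.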
 
When we started applying this technology or method, we found that it also helped us tease out what the meaningful changes were. So that’s a second way that our study kind of differentiated itself.
 
And the last way is we did a validation study. And it was a very small validation study, and we didn’t find definite Alzheimer’s cases because I think we got our hands on all of the publicly available spinal fluid that met our acceptance criteria in the first study. But what happened is we had a follow-up study of 28 subjects, 10 with probable Alzheimer’s disease and 18 controls. And in that case we went through the same traditional proteomics methods. We pulled out the spots that we thought were the most interesting, and then we had the person doing the classification blinded to the diagnosis, and we found results that were very similar to those from the first cohort of 68 subjects.
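That validation logic can be sketched the same way, again on invented data, with cohort sizes that echo the study’s 68 first-cohort and 28 follow-up subjects: the classifier is fixed after training on the first cohort and applied to the follow-up cohort blind, with the labels consulted only afterward.

```python
# Sketch of a blinded validation on synthetic stand-in cohorts.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

def make_cohort(n_subjects, n_proteins=2000, n_markers=23):
    """Synthetic cohort: random protein levels with a planted marker signal."""
    X = rng.normal(size=(n_subjects, n_proteins))
    y = rng.integers(0, 2, size=n_subjects)
    X[y == 1, :n_markers] += 1.5
    return X, y

X_first, y_first = make_cohort(68)     # first cohort (sizes mirror the study)
X_follow, y_follow = make_cohort(28)   # follow-up cohort; labels held back

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_first, y_first)
preds = forest.predict(X_follow)       # classification done blind to diagnosis

# Only now are the follow-up labels unblinded to score the classifier.
print("follow-up accuracy:", (preds == y_follow).mean())
```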
 
Why did you choose to use CSF versus blood or urine or some other type of sample?
 
There are other tissues that one can argue are easier to obtain, but because it’s a disease of the central nervous system and of the brain, our approach has been to use spinal fluid to look for markers first, and then, for anything that we find interesting, to look at other tissues to see if that holds up.
 
The other reason is CSF is mostly water. Yes, there are a lot of proteins in there, but a lot of the sample preparation challenges that people face in proteomics are less of a concern. It’s not that there are no concerns, but there’s less of a concern when you’re working with a fluid that’s 95 or 99 percent water.
 
Does the problem of high-abundance proteins that you find in blood not exist in CSF?
 
No, that challenge remains, the broad dynamic range. The difference is more in terms of very hydrophobic proteins, hydrophilic proteins, and so on; most everything in [CSF] is pretty soluble.
 
Looking down the road, how feasible is it to develop a diagnostic test using CSF rather than some kind of tissue that’s more easily obtained?
 
I think it’d be very feasible to develop diagnostics based on CSF. There are other neurodegenerative diseases where doing a spinal fluid test is a normal part of the diagnostic workup. Looking at prion disease in humans, we did some work a number of years ago to identify and characterize a marker there, and that’s now a normal part of the diagnosis for Creutzfeldt-Jakob disease, so I think that’s certainly feasible. One can argue it’s more invasive than doing a blood test or taking saliva, but in the hands of an experienced clinician, it’s not any more complicated than just drawing blood.
 
I think the greater issue is to develop more robust technologies than the normal ones that people think about as proteomics experiments, to develop robust immunoassay-based techniques to make these measurements so that they can be done reliably anywhere.
 
Can your research, at this point, determine where someone is in the Alzheimer’s disease stage, whether they’re in the incipient stage or somewhere more advanced?
 
We have unpublished preliminary observations that suggest to us that the markers do go beyond being able to tell you if you have the disease or not, and they actually seem to change in a way that mimics the clinical outcome. But we haven’t published that yet. 
 
Do these biomarker results have therapeutic as well as diagnostic potential?
 
It would be inappropriate to say they have therapeutic potential by themselves. We’re looking at that. We do think that they do have the ability to impact the development of new therapies, though, because as drug companies identify new molecules and new interventions, when they do their clinical trials, they need to better assess who has the disease and who doesn’t have the disease, so our markers are useful that way. And we are using them to assess one particular experimental treatment that’s out there, and [are finding] that the markers do a very good job of giving us hints about what’s going on in that therapy.
 
 
Are you familiar with the work that Jing Zhang and his colleagues at the University of Washington are doing? It sounds in some ways that your work is similar.
 
Well, it’s complementary in some ways, I think. They did a shotgun strategy as opposed to [our] gel-based strategy, if I remember right. But I do know that they do very good work; they took care and time to find a good set of samples, and they’ve used a shotgun approach, which I think is also very good. I can’t remember [how big their sample size was], how many different subjects they had, but I do know from preliminary observations at the Alzheimer’s meeting in Spain this past summer that there’s some significant overlap in the names of some of the proteins.
 
Not everything is the same, of course, but there are a number of things that we found that also appear on their list. In a sense that’s a very reassuring finding. It’s complementary in the sense that if two groups using different techniques find similar answers, that’s actually the best possible thing as a scientist.
 
 
What’s next?
 
There are three things. One, there needs to be independent confirmation, and a greater number of subjects have to be studied using these markers, because in clinical studies like this, the more subjects you have and the more laboratories you have making these measurements, the more comfortable people will feel with the data.
 
The second is, one can never rely on the current proteomics technologies, either shotgun or gel-based, to do a diagnosis. You have to develop simpler methods like immunoassays, and so we’ve been working pretty actively on that.
 
And a third is for us to look at [this] in the context of treatments and look at how the markers behave and change as people are getting treated and in some cases, if they’re getting better or if they’re not getting better, look at how those markers change and make those correlations.
