At A Glance
Name: Dobrin Nedelkov
Position: Director of research and technology development, Intrinsic Bioprobes, since 1999.
Background: Postdoc, Ken Williams’ laboratory, department of molecular biophysics and biochemistry, Yale University, 1998-1999.
PhD in biochemistry from Arizona State University, 1997.
You joined Intrinsic Bioprobes in 1999. What work did you do before joining Intrinsic Bioprobes?
I’m from Macedonia, and I did my undergraduate work there: a bachelor’s in biotechnology and engineering. I graduated in 1993 and joined the PhD program at Arizona State University in the group of Allan Bieber, where I did a lot of protein biochemistry. That’s where I first got in touch with the latest proteomic technologies, including mass spectrometry; separation techniques like capillary electrophoresis and HPLC; and some NMR structural studies on protein toxins. I graduated in 1997 and went to do a postdoc in Ken Williams’ lab at Yale University. That’s where I got into the hard-core proteomics technologies like 2D gels, mass spectrometry, and so on. I still worked a lot on proteins, mostly structural characterization.
Then, in 1999, I got a call from Randy Nelson, whom I had met before at ASU, where he was a visiting professor. He had started a company and called to ask me to join him. I visited, I liked what I saw happening at IBI, and I’ve been there ever since. IBI was founded in 1996, but I’ve known Randy since my early PhD days at Arizona State University.
What significant discoveries did you make during your postdoc or PhD?
Well, working with proteins is not an easy thing — that’s my biggest discovery. It was mostly structural studies and development of some proteomic approaches, but it got to the point where it became pretty standard and I needed a bigger challenge. IBI offered that, so I got involved in proteomic technology development at IBI, and my biggest achievement has been at IBI.
What were you doing when you first joined IBI?
I started with a technology called biomolecular interaction analysis mass spectrometry, or BIA-MS for short. It’s essentially an integration of surface plasmon resonance with mass spectrometry, and the company had just received a grant from the NCI to develop this technology. I took a leading role in that. The project entailed creating a two-tiered technological approach in which SPR is used for quantitative determination of proteins from solutions like plasma or other biological fluids. Once proteins are captured on the SPR chip and quantified, we take the chip, put it in the mass spectrometer, and obtain the masses of the proteins; that’s the structural characterization. Then we apply enzymes to the chip, so we go from intact proteins to peptide fragments, and we confirm structural modifications or the protein ID. That project is still going on, but now it’s in a different stage.
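For readers who think in code, the two-tiered readout described above can be sketched as a toy pipeline. Everything here — the function names, the response-to-mass conversion factor, and the reference masses — is a hypothetical illustration, not IBI’s actual software:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ChipSpot:
    """One affinity spot on an SPR chip (hypothetical model)."""
    ligand: str            # capturing antibody on the spot
    response_units: float  # SPR signal, proportional to captured protein


def quantify(spot: ChipSpot, ru_per_ng: float = 10.0) -> float:
    """Tier 1: convert the SPR response into a captured amount (ng).

    The conversion factor is invented for illustration.
    """
    return spot.response_units / ru_per_ng


def identify(intact_mass_da: float, reference: dict, tol_da: float = 50.0) -> Optional[str]:
    """Tier 2: match the intact mass read off the chip by MS
    against a table of reference protein masses (Da)."""
    for name, mass in reference.items():
        if abs(intact_mass_da - mass) <= tol_da:
            return name
    return None


# Illustrative reference masses, not measured values.
reference = {"transthyretin": 13761.0, "cystatin C": 13343.0}
spot = ChipSpot(ligand="anti-TTR", response_units=250.0)
print(quantify(spot))                # captured amount in ng
print(identify(13761.4, reference))  # name of the matched protein
```

The point of the two tiers is visible in the types: SPR alone yields only an amount, while the mass spectrometric step adds the identity and, with a digest, the structural detail.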
After the initial stage I program, we entered a stage II program that we successfully completed last year. Now we’re looking forward to working on an SPR-mass spectrometry array with up to 100 spots. Essentially we want to make it a high-throughput technology and apply it to studies in humans.
What kind of an array is it?
The spots are affinity ligands, most likely antibodies. Up to 100 antibodies can be arrayed on the surface, and solutions containing protein antigens are passed over them. After binding and quantification of the binding using SPR, the chip is put in the mass spectrometer and a read-out is obtained of the molecular masses of the proteins being analyzed. We’re hoping to do plasma proteins. They’re not going to be kinases; they’re going to be proteins circulating in plasma that can be used as biomarkers for specific diseases, from cancer biomarkers to heart disease biomarkers.
That project is in the early development stage, but there is a lot of other technology that I’ve been involved with lately. One is called MASSAY. It again involves high-throughput characterization of proteins, but it is not on a chip. Essentially it’s affinity capture followed by mass spectrometry, helped by high-throughput robotics. It involves what we call affinity pipettes: small-volume pipette tips that are derivatized with affinity ligands. We push a solution of interest through these affinity pipettes. The protein of interest is captured, alone or along with its binding partner, and then it is eluted onto a mass spectrometer target. We get an initial reading from the mass spectrometer for the intact protein mass, and then we do digests on the target by using enzyme-derivatized mass spectrometer targets, another IBI-patented technology, so we can get peptide maps of the proteins that we have pulled from the solution. Then we can assess structural modifications and try to correlate those modifications with disease states. That’s the state we’re in right now: we’re applying the technology. We’re beyond development.
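The last step — correlating mass shifts in a peptide map with modifications — can be illustrated with a small sketch. The peptide names, reference masses, and modification shifts below are generic illustrative values, not data from the MASSAY platform:

```python
# Toy sketch: compare observed peptide masses against a reference digest
# and flag mass shifts that match known modifications.
# All names and masses are invented for illustration.

REFERENCE_PEPTIDES = {"pep1": 1045.5, "pep2": 1523.7, "pep3": 2211.1}  # Da
KNOWN_SHIFTS = {79.97: "phosphorylation", 42.01: "acetylation"}        # Da


def flag_modifications(observed, tol=0.1):
    """Return (peptide, modification) pairs where an observed mass
    equals a reference peptide mass plus a known shift, within tol."""
    flags = []
    for mass in observed:
        for name, ref in REFERENCE_PEPTIDES.items():
            delta = mass - ref
            for shift, mod in KNOWN_SHIFTS.items():
                if abs(delta - shift) <= tol:
                    flags.append((name, mod))
    return flags


# pep2 appears shifted by ~80 Da, suggesting phosphorylation.
print(flag_modifications([1045.5, 1603.67, 2211.1]))
```

A real pipeline would of course work from measured spectra and a proper modification database; the sketch only shows the shape of the mass-shift comparison.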
Is the technology out on the market?
It is, yes. We have placed a few systems with strategic partners, and the technology is available.
Is that the first affinity-capture pipette of its kind?
Yes. This goes back to 1995. The technology was pioneered by Randy while he was at ASU, and there are patents dating to 1995 that essentially deal with affinity capture and mass spectrometry of proteins. We have just taken some time to work diligently on making the technology robust and reproducible enough that we can apply it with high confidence to hundreds, if not thousands, of samples and come up with specific correlations.
One of the projects we have started is a population proteomics study: applying proteomics to the general population. Most researchers now are using proteomics to characterize proteins in one or two, or a limited number of, samples, trying to come up with the biggest number of proteins that circulate in plasma. Our approach is geared more toward applying proteomics to individuals, to delineate changes in their proteomes that can be used to diagnose disease, monitor therapy, and so on. It’s more of a targeted proteomics approach: we apply it to thousands of people instead of using proteomics to analyze thousands of proteins at once. So you can classify this more toward diagnostics, and that’s why the devices we have created are essentially consumables that can be utilized rather simply in the diagnostics world.
In the population proteomics project, we took antibodies for 25 proteins and 96 samples representative of the general population of the United States. We analyzed those 25 proteins in the 96 samples and came up with a number of structural variants and modifications of these proteins that exist in the population. By doing this study, we are not only establishing the basal level of variation in the proteome that exists in the general population, we are actually cataloguing all these modifications and the frequencies at which they occur. This is only a limited number of samples, of course, but we’re expanding the sample pool now, and we expect to do this project with thousands of samples in a high-throughput mode.
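As a sketch of what this cataloguing looks like computationally, here is a toy frequency tally. The variant names and the four "samples" are invented to show the shape of the calculation; the real study covers 25 proteins across 96 samples:

```python
from collections import Counter

# Hypothetical per-sample variant calls: each set lists the protein
# forms observed in one plasma sample. Names are made up for illustration.
samples = [
    {"TTR wild-type", "CysC wild-type"},
    {"TTR G6S", "CysC wild-type"},
    {"TTR wild-type", "CysC truncated"},
    {"TTR G6S", "CysC wild-type"},
]

# Count how many samples carry each variant, then convert to frequencies.
counts = Counter(v for sample in samples for v in sample)
n = len(samples)
frequencies = {variant: c / n for variant, c in counts.items()}

for variant, freq in sorted(frequencies.items()):
    print(f"{variant}: {freq:.0%}")
```

Scaled up to thousands of samples, a table like this is the "basal level" the study aims to establish: the background frequency of each structural variant against which disease cohorts can later be compared.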
Is this applicable to disease research?
Absolutely. We have actually started doing this with two different projects, in cancer and cardiovascular disease, where we have samples from well-characterized, diagnosed individuals with various stages of disease. We screen those samples, along with age- and sex-matched controls, for a number of proteins that we think could serve as potential biomarkers for the disease. We’re looking for changes, both structural and quantitative, in these disease samples that differ from the profile in the general population. In that way the approach can be used in biomarker discovery, and also in biomarker validation: if a biomarker is discovered by other means, we can screen for that biomarker in thousands of samples really quickly and validate whether the changes are indeed representative of the disease.
What other projects are you working on?
These are the two main projects that I’m in charge of. We’re still doing some technology development and improvement, but that’s not the major part. We’re also growing as a business, so that’s another thing I’m taking part in. In terms of technology improvement, we’re working on improving the limits of detection, increasing the throughput of the analysis, and streamlining the whole process so it can be easily transferred from our lab to somebody else’s lab. The demand is for simplified devices and simplified platforms, and that’s what we’re doing right now. We’re looking at ways to make this as simple as possible for the potential users, who ultimately will be clinicians and people working in the diagnostics world.
What kind of things are you working on for the future?
We’re really looking forward to starting several collaborations with labs that have access to well-qualified pools of samples. The data is only as good as the samples, so we want to apply our methods to specific questions. The other thing is the SPR-MS project, which I think has big potential because it’s a chip-based platform, and it’s very elegant, very efficient, and very high throughput.