
Neil Kelleher on Striving to Bring Top-Down to the Proteomics Masses


At A Glance

Name: Neil Kelleher

Position: Assistant professor of chemistry, University of Illinois at Urbana-Champaign, since 1999.

Background: Post-doc in enzymology, Harvard Medical School, 1997-99.

PhD in bioanalytical chemistry with Fred McLafferty and Tadhg Begley, 1997.

Fulbright scholar in organic synthesis, University of Konstanz, Germany, 1992-93.

BS in chemistry and BA in German, Pacific Lutheran University, Tacoma, Wash., 1992.

 

How did you first get involved in proteomics?

It was through being a graduate student with Fred McLafferty and Tadhg Begley at Cornell. It was protein analysis strategies, and of course using the high mass MS/MS capabilities that Fred had devised — taking electrospray and FT-MS and putting it together. I was working on vitamin biosynthesis in E. coli.

What drew you to the top-down strategy and why do you feel so strongly about it?

Really it stems from some of the thoughts I had when I first started thinking about intact proteins and mixtures of related forms of intact proteins, and just the differences if you directly purify and then fragment related species in a top-down fashion, versus the proteolytic method. So it was pretty clear that we were well-positioned, and there was a real sort of informatics argument about heterogeneity of proteins, [so] reading about signal transduction and things, the pieces fell together. Then the rest just becomes engineering — if you believe in the approach, and that there is in fact a unique advantage, then you dedicate yourself to that.

What would you say are limitations of top-down?

There are as many as its advantages. Dynamic range, which the entire field of proteomics faces and continues to have problems with, is an immense challenge. That's exacerbated for top-down at high mass. I would say the most severe limitation is, 'How do you do a 100 kDa protein?' There just need to be a few more advances.

What sorts of advances are you working on?

Fundamental advances in electrospray ionization — I think that’s a major requirement. Some of those can be solved through not necessarily stunningly brilliant advances, but just big muscle, [like] large magnetic fields for Fourier-transform mass spectrometry. And I think better ion optics [and a] new generation of mass spectrometers — there’s definitely movement on that front as well. [There is also the issue of] samples — how do you get a protein sample clean? I mean, really clean? This is less of a problem at the peptide level.

Are there instances where your lab prefers bottom-up?

We study the enzymology of 300 kDa proteins by looking at their covalent intermediates. So of course there we employ digestions. In a proteomics context, there are so many other labs doing so much great work that we don’t do that.

Tell me a little about the quadrupole FT-MS instrument that you built and work on.

It’s of the [Alan] Marshall design, and it’s worth anywhere between 30- and 70-fold in terms of either dynamic range or speed of data acquisition versus older-style FT-MS instruments. There [are] commercial versions of this, and there [are] now higher space-charge ion traps coupled with FT, so I think those instruments have great capacity to have more and more people do top-down protein analysis, if not top-down proteomics.

[The quadrupole] allows you to enhance the signals of particular components of the mixture. And for intact proteins of course, your signals tend to be lower than for some small peptides. So particularly in a mixture, just like peptides, there will inevitably be higher abundance components and lower abundance components. You’d like to get identifications on as many as possible. The quadrupole hybrid allows you to pre-filter, selectively accumulate low abundance species, and get [them] of sufficiently higher abundance to get identification and characterization.

What is your involvement with HUPO, if any?

Alas, little — I’m on a sub-committee. Top-down has not been embraced as part of the long-term future of large endeavors like HUPO. This is [because it’s] nascent. There are two papers we published recently — one on 70 proteins, [and] one on 130 yeast proteins. But we’re the first lab to show that. I think there’s just not a lot of faith that it will ever be a high-throughput, robust method. And I couldn’t disagree more strongly.

Why are you so confident that top-down will eventually be more high-throughput and robust?

I think a lot of people are seeing the benefits of top-down protein analysis. But that’s if you have a simpler case where you just want to characterize a protein efficiently. In a proteomics context, I think the more people that start to focus on the barriers to why top-down cannot be achieved on a larger scale and for higher mass proteins, the faster those issues will be ameliorated.

It’s slow, but it’s coming. Even just the semantic argument that people use — the phrases top-down and bottom-up — these are now widespread. I think that’s a victory of sorts for top-down. There were contributions from many labs that have now brought us to the state-of-the-art in bottom-up technologies, and the number of people working on top-down has just been so small.

So you see it as just a matter of more effort?

Sure — with the knowledge that there needs to be some sort of fundamental advance to allow top-down to really function on a proteomics scale. Incrementally, it will get to the point of high-throughput.

I think there hasn’t been a whole lot of effort [among vendors] because the market is so small. But I think that will change with time.

What sorts of special requirements need to be developed for software used with top-down approaches?

We’ve been very active in that area — probably that’s been our most proactive area of progress. There’s a whole database strategy that we call shotgun annotation that is particularly beneficial for streamlining the identification and the characterization process. It’s a website called ProSight PTM. There are about 80 external users.

Tell me about shotgun annotation.

If you look at the informatics of top-down, using high mass accuracy, the specificity is incredible. If you get a search that has a probability score of 10⁻¹⁰, you don't have to manually validate that identification — I mean, that's the protein. And now you can worry about characterization. So the philosophy behind shotgun annotation is, let's put all sorts of biological variability and bioinformatic imprecision that creates multiple possibilities from every gene into the database. Those possibilities can be from polymorphisms, or the variability can be from known modifications that could or could not be on the protein. So all of this knowledge, from diverse sources, is shoved into the database. And the result often is that you are able to both identify and characterize in parallel on human proteins.
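The core idea of shotgun annotation described here — expanding each gene entry into the set of candidate protein forms implied by its known modifications, then matching an observed intact mass against that expanded database — can be sketched as follows. This is an illustrative toy, not the ProSight PTM implementation; all protein names, masses, and the tolerance value are assumptions chosen for the example.

```python
# Minimal sketch of the "shotgun annotation" strategy: expand each
# protein into candidate forms carrying every combination of its
# known modifications, then match an observed intact mass against
# that expanded database. Masses and names are illustrative only.
from itertools import combinations

# Hypothetical base protein masses (Da) and known modification masses.
PROTEINS = {"histone_H4": 11236.15}
KNOWN_MODS = {"histone_H4": [("acetyl", 42.0106), ("methyl", 14.0157)]}

def expand_forms(protein, base_mass, mods):
    """Yield (description, mass) for every subset of known mods."""
    for r in range(len(mods) + 1):
        for subset in combinations(mods, r):
            names = "+".join(name for name, _ in subset) or "unmodified"
            mass = base_mass + sum(delta for _, delta in subset)
            yield f"{protein} [{names}]", mass

def match_intact_mass(observed, tol_ppm=10.0):
    """Return candidate forms whose mass matches within tolerance."""
    hits = []
    for protein, base in PROTEINS.items():
        for desc, mass in expand_forms(protein, base, KNOWN_MODS[protein]):
            if abs(observed - mass) / mass * 1e6 <= tol_ppm:
                hits.append(desc)
    return hits
```

The combinatorial expansion is what lets identification and characterization happen in parallel: a single intact-mass (and fragment-mass) search lands directly on a specific modified form rather than on a bare gene product.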

Do you think that the cost of having high mass-accuracy machines is a limiting factor in the spread of top-down?

Sure, [but] now the cost is coming down, [and] the level of engineering is going up — if you look at sales of FT-MS instruments, they've gone up considerably. And now there are $700,000 to $800,000 options and companies are buying two to three of them.

It might be many years before you can have the same level tech running a GC-MS [as] an ion trap-FT hybrid. But the gap is going to close pretty soon — in the next few years.

Are you working on continued improvements to your software system right now?

Sure. I'm very eager to get statistics out to the community that show, 'What is the throughput for top-down? What [are] the database retrieval requirements?' Because once we get it out to the community, I think there will be an appreciation for the efficiency.

It’s efficient in that you don’t have to find all the individual peptides?

And then recompiling Humpty Dumpty — it’s hard to put them all back together.

Are you looking to partner with any companies on this?

That’s ongoing and under negotiations with a couple different organizations.

What else is coming up in top-down?

LC-MS is coming for top-down. [Also], I think that for the biomarker space, there is a role for using top-down mass spectrometry there, both for protein profiling, and for identification of protein-based biomarkers. If the biomarker you’re interested in is a specific form of the protein, then digestion of a sample obscures that information. For people struggling to understand the top-down bottom-up complementarity, [look at] histone biology. Histones are small proteins and they’re multiply modified, and that’s where the beauty of top-down can be appreciated — when you have multiple modifications that form some type of logic.

Are you working on biomarkers at all?

We have an active project in that area — it’s a collaboration with a major pharma company. [It’s for biomarkers of drug efficacy] in late-stage research or pre- or early- clinical trials.
