At A Glance:
NAME: Ronald Beavis
POSITION: Adjunct professor and senior fellow, Institute for Biophysical Dynamics, University of Chicago, since 2002
PRIOR EXPERIENCE:
- PhD, University of Manitoba, Canada, 1987. Worked with Ken Standing on ion sources for peptide analysis by mass spectrometry.
- Postdoc, Technical University of Munich, Germany. Worked with Edward Schlag on gas-phase spectroscopy of peptides.
- Assistant professor, Rockefeller University, 1989
- Assistant professor, physics, Memorial University, Canada, 1993-95
- Associate professor, New York University, 1995-97
- Senior research scientist, Eli Lilly, 1997-99
- CEO, ProteoMetrics, Winnipeg, Canada, 1999-2002
So you and Brian Chait built the first MALDI in North America in 1989? How did that differ from the very first design?
It differed from the very first design in that it was much simpler. The instrument that Michael [Karas] and Franz [Hillenkamp] had used was originally designed to be a laser microprobe instrument. Everything about it was designed so that you could retain as much positional information as possible. Ours was a much simpler machine; the laser was focused on a much wider area; and it was much easier to measure the fundamental properties of what was going on.
Did that inspire companies to build MALDI instruments commercially?
Vestec, the company that Marvin Vestal had started around the thermospray source, licensed the patent from Rockefeller almost immediately and started to build machines. Vestec was then purchased by PerSeptive, which was then purchased by ABI, and that’s what’s turned into the Voyager. Essentially, the same design was used in what eventually became the Hewlett Packard time-of-flight MALDI machine.
The Bruker [instrument] used a slightly different design, which was modeled on the types of instruments I used when I was in Munich [as a postdoc]. A particular sort of reflectron-based machine that had been designed in Ed Schlag’s lab long before I got there was the prototype for what Bruker used, with a different sort of analyzer design, but really very similar ion source.
The design [of the Amersham Biosciences MALDI] was made by a group of people in England who left Kratos to start their own company, Scientific Analysis Instruments. Their ion source is very much like the one we had at Rockefeller, but the analyzer is quite different because it uses a design for a special sort of ion reflector.
The source we came up with at Rockefeller was really favored by commercial companies, because it was much easier to manufacture than the source that Franz and Michael originally used to observe the effect.
Do you see room for improvement for MALDI mass spectrometry?
Yes, I see considerable room for improvement, both in MALDI and electrospray. The main problem with the current sources is that they are not very quantitative, so hopefully there is somebody out there working on a new ion source that will be a lot more quantitative. In both cases the sources are very dependent on the physical properties of the peptide or protein. It would be nice to have an ion source that wasn’t dependent on the primary structure of the protein or peptide. Possibly something like the electron capture work that Fred McLafferty has been working on that has to do with fragmentation in FTICR machines. [What he published a year ago] wasn’t directly an ion source; it was for producing fragmentation, but I can’t see any reason why a similar sort of scheme couldn’t be applied earlier on in the ion source and actually generate ions.
What did you do after leaving Rockefeller?
I moved back to Canada, to Memorial University in Newfoundland, on the far east coast, mainly because my wife’s brother lived there. They have quite a good physics department that is interested in what’s called soft condensed matter physics, and I did some more work on MALDI basic processes there. Then after three years I moved to New York University Medical School, where I did a lot of applications work with both mass spectrometry and protein chemistry. I spent a few years at Eli Lilly in Indianapolis after that, running a group that did analytical work associated with their protein drug development division. In 1999 I left Lilly and started working for ProteoMetrics, which my friend David Fenyö had started back in 1997. I am a cofounder, but I only started working for them that year.
Is it true you wrote Sonar, PAWS, and M/Z?
Yes. The first version of M/Z I wrote back in 1991.
What do you think about the various database and data standardization initiatives?
I think the data standardization is an enormous problem. It will be difficult to get that to happen, though, mainly because each one of the instrumentation vendors on the mass spectrometry side has a commercial interest in maintaining their own software and data that can’t be read by other people’s software. There have been several attempts in the past to standardize data collection formats, and none of them have been terribly successful. It is possible that this particular user community — if they can stay united and bring the instrument manufacturers into the process — may be able to influence them to at least produce some sort of common output.
Do you see a need for a public proteomics database, similar to the genome databases?
Yes. Depending on the genome, somewhere between 20 and 40 percent of the predicted genes have no known function. However, if you work in [a proteomics] laboratory, you do find a significant number of identifications associated with these hypothetical genes. There has got to be some public place where you can put this information, which is very difficult to publish in the peer-reviewed literature because you don’t really have a story you can build around it. I see the immediate value of such a database as bringing together all the information that people are generating about these genes. I don’t think there is a public project going yet, but I certainly hope there will be, and hopefully it’s a project that both North America and Europe get together on.
What do you do at the University of Chicago now?
I am a senior fellow at the Institute for Biophysical Dynamics, and I will probably remain here for a year or two, mainly doing theoretical work. I try to understand the statistical basis of protein identification: how to assign appropriate confidence intervals, and how to use this information properly to aid in the curation of genes and gene products. The main deficit in proteomics at the moment is that there is no standardized statistical framework for understanding the results that you get out. This does make it difficult in a proteomics company to make good business decisions based on the experimental results. And this is something I saw over and over again when I was with ProteoMetrics. It’s something I have been working on since the early ‘90s, in various incarnations. It’s part of a long-term research program on my part.
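[Editor's note: One common way to attach a confidence measure to a protein identification, which Beavis's later open-source work also adopted, is to treat most candidate matches as random and extrapolate from the score distribution. The sketch below is purely illustrative, not his exact method: it fits a log-linear model to the survival curve of match scores and extrapolates an expectation value (the number of equal-or-better matches expected by chance) for the top-scoring match.]

```python
import math

def expectation_value(scores, top_score):
    """Estimate how many matches scoring >= top_score would be expected
    by chance. Assumes the bulk of `scores` comes from random (wrong)
    candidate matches, whose survival curve decays log-linearly.
    Illustrative sketch only; real search engines refine this."""
    # Histogram the scores into integer bins.
    counts = {}
    for s in scores:
        b = int(s)
        counts[b] = counts.get(b, 0) + 1
    # Survival counts: number of matches scoring >= each bin.
    surv = []
    total = 0
    for b in sorted(counts, reverse=True):
        total += counts[b]
        surv.append((b, total))
    # Least-squares fit of log10(survival count) vs. score.
    pts = [(b, math.log10(n)) for b, n in surv if n > 0]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    # Extrapolate the fitted line out to the top score.
    return 10 ** (slope * top_score + intercept)
```

A small expectation value (say, well below 1) means the top match is unlikely to be a chance event, which is the kind of standardized confidence statement the field was missing.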
Are you planning to commercialize your solutions?
I am commercializing some of it through a partner but would like to put quite a bit of it into the public domain as well. Several open-source projects should be finished somewhere around July or August. These are specialized search engines, both for doing protein identifications and for some things that are fairly challenging from a computational point of view, like trying to figure out disulfide bonding networks and trying to utilize large numbers of mass spectra to answer very specific questions. For example, if you have collected a large volume of mass spectra from a particular organism, at the moment, you can’t use any software to query those and ask a question which you might not have anticipated when the original work was done, such as ‘is there any evidence that a particular residue is phosphorylated or glycosylated?’ It’s an issue that has to be addressed as we build up these larger databases of mass spectra. Rather than using the information once, we should be able to go back to it and use it as often as we want.
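[Editor's note: The simplest form of the retrospective query Beavis describes is a mass-delta scan: a residue carrying a modification shifts the peptide's mass by a characteristic amount (about +80 Da for phosphorylation, about +162 Da for hexose glycosylation). The sketch below assumes a hypothetical archive of (peptide sequence, observed precursor mass) pairs; a real spectrum store would hold much more.]

```python
# Monoisotopic residue masses (Da) and common modification mass shifts.
RESIDUE = {'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
           'V': 99.06841, 'T': 101.04768, 'C': 103.00919, 'L': 113.08406,
           'I': 113.08406, 'N': 114.04293, 'D': 115.02694, 'Q': 128.05858,
           'K': 128.09496, 'E': 129.04259, 'M': 131.04049, 'H': 137.05891,
           'F': 147.06841, 'R': 156.10111, 'Y': 163.06333, 'W': 186.07931}
WATER = 18.01056  # added once per peptide (H2O at the termini)
MODS = {'phosphorylation': 79.96633, 'hexose glycosylation': 162.05282}

def peptide_mass(seq):
    """Monoisotopic mass of an unmodified peptide."""
    return sum(RESIDUE[aa] for aa in seq) + WATER

def modification_evidence(archive, tol=0.05):
    """Scan an archive of (peptide sequence, observed precursor mass)
    pairs and report any whose mass excess over the unmodified peptide
    matches a known modification, within a tolerance in Da."""
    hits = []
    for seq, observed in archive:
        delta = observed - peptide_mass(seq)
        for name, shift in MODS.items():
            if abs(delta - shift) <= tol:
                hits.append((seq, name))
    return hits
```

The point of the design is that the question ("any evidence of phosphorylation?") can be posed long after the spectra were collected, without re-running the original search.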
Do you see a problem with infringing patents, e.g. the Sequest patent?
It’s only a problem if you don’t understand the Sequest patent. Since I was in the business and wrote MS/MS search engines commercially, I am very familiar with what it covers and doesn’t cover. I am fairly confident that there are approaches to doing this sort of database search which don’t infringe.
Where do you see proteomics going?
The thing to look for over the next few years is the emergence of a second wave of proteomics companies that are much more tightly focused than the first set of proteomics companies. There are a lot of people now, both on the business side and on the scientific side, who [have] had exposure to the technology. They understand it, they know what it can do. The next round of companies will be able to put together business models that more closely match the priorities of the larger companies they want to partner with. It means applying the technology in different areas that are much more tightly focused and associated with specific therapeutics.
[Existing proteomics companies] have been hoping that if you put enough data together, there would be some sort of emergent property that would come out of it that would help you go forward with producing either drug targets or therapies. But I think that approach, from a business point of view, is just too ill-defined to catch the imagination of people who are working in the pharmaceutical industry and who understand how long it takes for these things to go forward, and how much money you have to spend.