
Barnett Institute's Bill Hancock On Advancing Proteomics Technology


At A Glance:

Name: William Hancock

Age: 58

Position: Professor of Chemistry, Barnett Institute of Chemical and Biological Analysis and Department of Chemistry, Northeastern University, since 2002

Editor-In-Chief, Journal of Proteome Research, since 2002

Background: VP, proteomics development, Thermo Finnigan, 2000-2002

Principal scientist, HP Laboratories (later Agilent), 1994-2000

Various positions at Genentech, 1985-1994

Various academic positions, Massey University, New Zealand, 1972-1985

Postdoctoral fellow, Washington University School of Medicine, St. Louis, 1970-1971

PhD, Adelaide University, South Australia, 1970

BSc, Adelaide University, South Australia, 1966

 

How did you get into proteomics?

My training is in protein chemistry, with an equal mixture of organic chemistry and biochemistry. After getting my PhD in organic chemistry from the University of Adelaide in Australia, I did a postdoctoral fellowship in St. Louis, working on the chemical synthesis of a protein for biotechnology. After my postdoc, I went back to an academic position in New Zealand, where I continued with chemical synthesis of proteins by the Merrifield solid-phase method. That got me into HPLC in the early days. We actually published the first protein separation by HPLC, and also the first peptide mapping study.

As an academic I was a visiting scientist at the FDA, and that led me to an opportunity at Genentech. I founded their analytical department and contributed to the approval of Genentech’s first biotech drugs: growth hormone, the heart attack drug TPA, and gamma interferon.

Then I went to Hewlett Packard when they started their life sciences initiative and got involved with their gene chips, and then got interested in proteomics when that came along. After HP, I went to Thermo Finnigan to set up their proteomics program.

I just left there, after two years. I started in November at the Barnett Institute at Northeastern University. I like to say that I am sort of closing the circle, coming back to academia. I have seen the government regulation side, I know the university side, the biotech side, and the instrument companies. I think you need all of it to put together proteomics and make it successful. My feeling was, this is a good time to come back to academia because there is a lot of basic research that needs to be done now.

What made you switch from a biotech company to an instrumentation company?

We were kind of frustrated at Genentech because we couldn’t get the instrument companies to produce the right sort of technology — largely in HPLC and mass spectrometry — that we needed to do our applications research and product development. It’s hard at a biotech or pharmaceutical company to do cutting-edge instrumentation development; you don’t have the team of engineers. The challenge at an instrument company, where you have the engineers, is that you don’t have the access to the latest in applications or biological problems. In either case, you try and do it with collaborations.

What was proteomics at Agilent about?

Really it was the beginnings of LC-MS at Agilent. That led them to do a deal with Bruker for the ion trap. I was involved in the early parts of that, their LC-MS strategy. There was also the Caliper program in microfluidics, which was initially RNA profiling, but more recently they came out with their protein chip. That was the beginning of my interest in proteomics.

The change from protein chemistry into proteomics of course really came with the sequencing of the human genome and John Yates coming up with his software for shotgun sequencing. However, I have always been more interested in the HPLC side of things.

What did you do at Finnigan?

The product that my proteomics program produced was the ProteomeX, the 2D-LC ion trap solution. The next product, which they are developing now, is an application based on FT-MS. They are planning to introduce the FT-MS for proteomics in the relatively near future. The ProteomeX has actually been very successful; I know they sold a lot of systems. But accurate mass also has a role: that's the application area the FT-MS will fit into.

What is your current research focused on?

Here at the Barnett Institute and at the department of chemistry, I am collaborating with Barry Karger and Massachusetts General Hospital to look at the proteomics of breast cancer.

This will continue my work at Thermo Finnigan, where we looked at a breast cancer model cell line, the SKBR3 cell line, and analyzed as few as 10,000 cells. We did what we call a direct cell analysis, where we took all the proteins in the cell, solubilized and digested them, and then used the LC-MS/MS approach to identify as many as possible. That is in press right now in the Journal of Proteomics. Previously, people have looked at 100,000 or 200,000 cells, which is too much for a biopsy, whereas 10,000 cells is getting closer to the number of cells you get with a biopsy.
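The digest-and-identify workflow he describes has an in-silico counterpart: shotgun search engines match LC-MS/MS spectra against peptides predicted from a database digest. As a rough sketch, and assuming the standard tryptic digest (the interview does not name the enzyme), the snippet below applies trypsin's cleavage rule, after K or R but not before P, to a hypothetical toy sequence.

```python
import re

def tryptic_digest(protein: str) -> list[str]:
    """In-silico tryptic digest: cleave C-terminal to K or R, except before P."""
    # Splitting on a zero-width pattern preserves every residue; drop empties.
    return [p for p in re.split(r"(?<=[KR])(?!P)", protein) if p]

# Hypothetical toy sequence, for illustration only.
print(tryptic_digest("MKWVTFISLLLLFSSAYSRGVFRRDTHK"))
# -> ['MK', 'WVTFISLLLLFSSAYSR', 'GVFR', 'R', 'DTHK']
```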

Now we are gearing up with laser capture microdissection technology from Arcturus and monolith columns from Barry Karger’s lab, and we are starting to analyze individual patient biopsy samples.

What instruments do you use?

It’s a collaboration with Arcturus, using their laser capture microdissection instrument, and Thermo Finnigan, using their ion trap mass spectrometer, and then in the future we plan to use their FT-MS. Barry Karger collaborates with Applied Biosystems and is using their TOF-TOF system.

The entire institute actually has 14 different mass spectrometers, and I will be getting a couple more, so we will be up to 16. There is no one approach that covers all aspects of proteomics, so we will use a variety of approaches.

How many samples are you looking at?

Initially, we are just doing technology development, so we are looking at only a handful of samples. We will then apply for government funding to do a larger patient study and get some decent statistics. One of our initial goals is to improve the dynamic range of the proteomic measurement by, say, a factor of 1,000. If we are successful, then you will be seeing a larger part of the proteome.

How are you going to do that?

Barry Karger’s monolith columns will be helpful to improve the sensitivity, and I think the new FT-MS will also improve the sensitivity. The laser capture microdissection technology, getting a pure cell type, will help as well. And then the sample prep, removing some of the abundant proteins, capturing a subset of the proteomic mixture. We have to put all that together to get a factor of 1,000.
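Because the improvements he lists act at different stages of the workflow, their contributions multiply rather than add. A minimal arithmetic sketch of how that compounding can reach three orders of magnitude; the per-step factors are hypothetical placeholders, not figures from the interview:

```python
# Hypothetical per-step gains; the steps come from the interview,
# but the numbers are illustrative assumptions only.
gains = {
    "monolith columns (separation)": 10,
    "FT-MS (detection sensitivity)": 10,
    "laser capture microdissection (pure cell type)": 2,
    "abundant-protein depletion (sample prep)": 5,
}

total = 1
for step, factor in gains.items():
    total *= factor
    print(f"{step}: x{factor} (cumulative x{total})")

# Gains at independent stages compound: 10 * 10 * 2 * 5 = 1,000.
```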

Currently, we are just looking at the tip of the proteome, but if we can see a larger part of the proteome, then we will probably have a better understanding of what’s going on, and therefore find better disease markers.

What is the biggest challenge in discovering biomarkers?

We all want to find biomarkers, but we need to understand the basic biology before we can race off and say ‘yes, we found the biomarker.’ That’s a big job, but fortunately, we have got a pretty large community, all working hard on that. I think the biggest stumbling block at this stage is to understand individual patient variability, and how that will affect finding good biomarkers that will be useful in general screening.

What do you plan to do later on?

Down the road, you do want to increase sample throughput. Let's say we do come up with a system with greater dynamic range; if that's a research-oriented, high-tech, limited-throughput system, it's not going to allow you to look at 2,000 or 3,000 patients. What you have to do is try and integrate some of those functions on a microfluidic device to give you that throughput. I would say we are a ways off, but that would be where you want to end up. I think microfluidic devices are not so successful yet because folks haven't integrated enough functions onto a single device. You want to have sample prep, and separation, and ionization on that one device; you wouldn't want them on separate devices, because then you are not really helping throughput.

What about protein arrays?

That's another answer to trying to increase the throughput. After the basic biology, let's say you come up with 100 good markers. You could put them down on a protein array and do that measurement, perhaps with mass spec as a readout. In the history of clinical chemistry, the mass spectrometer has not been a widely accepted instrument, just because of its cost and complexity. But maybe down the road we will see that.

What would be the advantage of using mass spec?

For example, if you have an antibody, what does it bind to? The mass spectrometer can tell you that the antibody may be pulling down several related proteins, which will affect the clarity of your measurement. Also, the mass spectrometer can pick up splice variants, or glycosylation or phosphorylation variants.
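Each of those variants shows up in the mass spectrometer as a characteristic mass shift; phosphorylation, for instance, adds an HPO3 group (+79.966 Da, monoisotopic). A minimal sketch of that calculation on a hypothetical peptide, using standard monoisotopic residue masses:

```python
# Standard monoisotopic residue masses (Da).
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
    "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
    "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
    "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER = 18.01056    # H2O added for the peptide termini
PHOSPHO = 79.96633  # HPO3 added by phosphorylation

def peptide_mass(sequence: str) -> float:
    """Monoisotopic mass of an unmodified peptide."""
    return sum(RESIDUE_MASS[aa] for aa in sequence) + WATER

# Hypothetical peptide with one phosphorylatable serine, for illustration only.
pep = "SAMPLER"
print(f"unmodified:     {peptide_mass(pep):.4f} Da")
print(f"phosphorylated: {peptide_mass(pep) + PHOSPHO:.4f} Da")
```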

Where do you see proteomics going?

The current phase is technology innovation, in the areas we have been talking about. We need to go deeper into the proteome, we need to have much better throughput and better quantitation, and we need to characterize the proteome of cells and important bodily fluids, such as plasma. Once we have done all that, then we can all get into the diagnostic and therapeutic areas.
