MD Anderson's Keith Baggerly on the Importance of Experimental Design in Proteomics

Keith Baggerly
Associate and assistant professor
The University of Texas MD Anderson Cancer Center

At A Glance

Name: Keith Baggerly

Position: Associate and assistant professor, department of biostatistics and applied mathematics, The University of Texas MD Anderson Cancer Center, since 2000.

Background: Assistant professor, department of statistics, Rice University, 1996-2001.

Technical staff member, statistics group, Los Alamos National Laboratory, 1994-1996.

PhD in statistics, Rice University, 1994.


Recently, the National Cancer Institute approved a $104 million initiative to optimize and enhance proteomic tools. As part of the initiative, a consortium of scientists will be formed to develop protocols to improve the standardization of proteomics experiments. ProteoMonitor decided to talk to Keith Baggerly, a biostatistician at MD Anderson Cancer Center, about the statistics of proteomics and issues regarding experimental design.

What is your background in terms of biology and statistics?

I have a degree in statistics from Rice. I worked with a statistics group in Los Alamos for a few years. I came back and I taught statistics at Rice, and around 2000, MD Anderson in Houston was in the process of starting up a bioinformatics section. The option was there to look at 'really cool data of a qualitatively different type.' So I shifted over and originally started working on things like DNA microarray data, and since we get a lot of other types of data, a few years back they said, 'Well, we're going to start looking at mass spec data too. Can you take a look at this?' That's how we wound up looking at it.

When you first studied statistics, were you doing statistics of biology?

Not at all. Pretty much all the biology that I know has been absorbed through working at a cancer center for the last four or five years. I had more of a mathematical background.

What drew you to working on biological problems?

Actually, the main interest at first was not the biology. It was the fact that until about ten years ago, statistics had been focused on the idea that we're going to look at a group of individuals — say a few hundred, or something like that — and we're going to measure a few things on them, and we're going to figure out the general characteristics of what the thing we're measuring looks like in that population. One of the things that characterizes that setting is that the number of patients, or individuals we're making the measurements on, is typically in excess of the number of things that we're measuring. So if we're measuring height — the standard, simplistic example — we look at 100 people, and we're measuring one thing on each of them.

But all of a sudden, with the development of microarrays and these other high-throughput biological techniques, what we have now is a class of data where we're starting with a limited number of patients — say, 50 or so — but for each of those patients we're going to get a measurement of several thousand different genes. So that requires the development of a whole bunch of new mathematical methodology. And I will acknowledge from the geek standpoint that it was the prospect of having access to new mathematics that initially drove this. However, once we've gotten here, we've been going back and forth with the biologists, and a whole bunch of what we've been running into is that in many cases our analyses are improved if we understand more about the biology, because we can say, 'Oh, by the way, this standard model won't work here because, well, this is a blood vessel, and blood vessels don't behave that way.' So there is actually an interplay that we've been working to develop over time. Basically it's fun. I get to sit around, and it's a legitimate use of my time to sit here and stare at Alberts' Molecular Biology of the Cell.
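
As a minimal sketch of that shift (the counts are invented for illustration, not taken from the interview), the Python snippet below shows why measuring thousands of features on a few dozen samples demands new methodology: even pure noise yields hundreds of apparently significant "genes" at conventional thresholds.

```python
# Sketch of the p >> n problem: 50 patients, 5,000 "genes", no real signal.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_genes = 50, 5000
labels = np.repeat([0, 1], n_samples // 2)        # two arbitrary groups
data = rng.normal(size=(n_samples, n_genes))      # pure noise, no biology

# Two-sample t statistic for every gene at once.
g0, g1 = data[labels == 0], data[labels == 1]
se = np.sqrt(g0.var(axis=0, ddof=1) / len(g0) +
             g1.var(axis=0, ddof=1) / len(g1))
t = (g1.mean(axis=0) - g0.mean(axis=0)) / se

# At |t| > 2 (roughly p < 0.05), about 5 percent of 5,000 noise genes
# "separate" the groups -- roughly 250 false leads unless the analysis
# accounts for the number of things being measured.
print(f"apparently significant genes: {(np.abs(t) > 2).sum()} of {n_genes}")
```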

When you first got to MD Anderson, what problem did they put you on?

Initially they threw me at analysis of cDNA microarrays, making measurements on thousands of genes for each of a small number of samples. And the main reason we were doing this is that cancer is a genetic disease. We know that something in the genetic make-up is screwed up. So we're trying to track down, when we're looking at these thousands of things, where are these screw-ups most apparent? And can we piece together some type of biologically coherent story by looking at a whole bunch of these big changes and saying, 'By the way, does this make sense?' That was the initial type of thing.

The group at the time — the bioinformatics section — was just two or three people. So what started happening was that certain people dealing with several different types of cancer said, 'Gosh, this looks like a neat technology. We want to use that.' So they started bringing whole bunches of different types of data, and different types of array data to us, and saying, 'Can you help us with this?' So we kind of got a brief introduction to different types of cancer.

Did you develop a new mathematical or statistical technique?

Well, we have written a few papers on the analysis of DNA microarray data, yes. And that was back in 2001. And this was about the time that statisticians as a group were saying, 'Ooh cool, let's go over and play with these types of problems.' So it was about 2001, 2002 that you had a whole bunch of these types of papers appearing. We had some of the early ones on that, and we've kept our hand in and tried to regularly look at new types of array data and say, 'OK, do we still understand the goals and the limitations of the technology well enough that we can improve the way the analyses are done?'

What would you say are the key things to keep in mind in terms of doing an experiment with microarrays?

At least initially, a whole bunch of problems were associated with getting the measurements clean. It's as if you're looking at a picture held at arm's length, and the picture is smudged. And that smudge is due to things like the protocols for how they were going to do this not being standardized, or somebody having a bad day in the lab that day. What that meant was that there was a whole bunch of focus on getting the technology and the protocol stable.

More recently, these protocols have become more stable, so that concern has actually receded. One of the things this has also brought to the fore is that when you're going to look at a microarray study, or a proteomics study for that matter, many of the interesting statistical questions come at the very beginning, when you say, 'By the way, when I acquire my samples and I set things up at the beginning, how can I arrange the experiment in such a way that when I see changes, I'm pretty sure they're due to the biology I'm measuring, as opposed to some external factor?' For example, I mentioned that some of these things are smudged. It turns out that for microarray data, or whatever, if you prepare one batch of material in week one, and another batch of material in week two, the results may look somewhat different. And if you take a whole bunch of samples of healthy liver tissue, for example, to establish a baseline, and you run all of the healthy livers in week one, and then you come over here with all of the diseased liver samples and you run them in week two, then what you've done is you've managed to confound two things of interest. There will be changes due to the disease, but there will also be changes due to the fact that you shifted weeks and preparations.

So actually, if you think about the fact that you want to compare diseased liver with healthy liver, what that would suggest is that at time one you should run some healthy and some diseased, and at time two, again, you're going to run some healthy and some diseased. And what that's going to do is it's going to let us focus on the differences that are due specifically to the changes in biology, as opposed to those that are due to changes in processing. So that's sort of the basic idea of experimental design. That's what we've been traipsing about with — I get to go around and talk to lots of high-tech folks and talk about stuff that was invented 80 years ago and sound cutting edge.
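
To make the liver example concrete, here is a small simulation (invented numbers, offered only as a sketch of the point above): when every healthy sample is run in week one and every diseased sample in week two, a pure processing shift masquerades as a disease effect, while splitting both groups across both weeks cancels it.

```python
# Confounded vs. balanced run schedules with a week-two processing shift.
import numpy as np

rng = np.random.default_rng(1)
n = 20                                    # samples per group
disease_effect = 0.0                      # assume no true biological change
batch_effect = 1.5                        # constant shift added in week two

def run(m, week):
    """Measurements for m samples processed in the given week."""
    return rng.normal(size=m) + (batch_effect if week == 2 else 0.0)

# Confounded: all healthy in week one, all diseased in week two.
healthy = run(n, week=1)
diseased = run(n, week=2) + disease_effect
print("confounded apparent effect:", round(diseased.mean() - healthy.mean(), 2))

# Balanced: each week processes half of each group, so the shift cancels.
healthy = np.concatenate([run(n // 2, 1), run(n // 2, 2)])
diseased = np.concatenate([run(n // 2, 1), run(n // 2, 2)]) + disease_effect
print("balanced apparent effect:  ", round(diseased.mean() - healthy.mean(), 2))
```

With no true disease effect in the simulation, the confounded schedule still reports a difference near the full batch shift of 1.5, while the balanced one reports a difference near zero.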

Are people much more aware of experimental design problems now?

I hope so. One of the things we started doing a few years back is looking at mass spec proteomic data. Basically, the idea is that we've got a whole bunch of people here who would like to use mass spec to find a diagnostic assay for 'my favorite type of cancer.' The push came back in 2002. A large part of that was driven by some high-profile results that said, 'Hey, we can use this technology to distinguish ovarian cancer from healthy tissue.' That was a really cool result because at present we don't have a good non-invasive technology for that. Lots of people said, 'Well, if it works for ovarian cancer, maybe it'll work for liver cancer, or for bone cancer.'

We said we would like to take a look at the raw data and make sure that we understand how it was done in the first place. This was a scenario where some of the raw data had been made public. When we started looking at the raw data, we started finding problems. I mentioned this issue of running all the normals at one time, and all the cancers at another. It turns out that if you make some of these shifts, you can cause differences in the data that are sufficiently big and visible that, when they're looked at the right way, I can tell you, 'That's due to a change in the experimental conditions. That's not due to the biology.' So one of the things that we started finding with much of this data was that several of the bigger, stronger separating factors looked to be associated with, 'This group was run on one type of surface, and this group was run on another type of surface.'

The initial study looked at ovarian cancer, healthy ovarian tissue, and also benign ovarian disease. There are three types. What they found with the initial profiling was that, first off, they could separate cancer from normal, but they could also separate cancer from benign disease. But when we looked at the data, in looking at the spectra, there were some really big differences. And those big differences all set the benign disease apart from the other two groups. That should bother you. The reason that bothered us is that biologically, cancer is a big-time screw-up of cells, so if you're going to see a big shift in cellular responses, you would expect to see a big shift between normals and cancer. You wouldn't necessarily expect to see a big shift between cancer and benign. With benign disease you would expect to see a little difference, and it would probably be closer to normal.

Actually, what we were able to track down is that the benign disease [samples] were run in a completely different fashion from the other samples. So it was not the biology that was differentiating those — it was the experimental design.
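
One way to catch this kind of problem, sketched below on simulated data rather than the actual ovarian spectra, is to cluster the samples without using the diagnosis at all: if the strongest structure in the data tracks the chip surface or run date rather than the disease labels, the separation is experimental, not biological.

```python
# Does unsupervised structure track the processing surface? A toy check.
import numpy as np

rng = np.random.default_rng(2)
n, p = 30, 200                            # 30 spectra, 200 intensity bins
surface = np.repeat([0, 1], n // 2)       # which chip surface each sample used
spectra = rng.normal(size=(n, p)) + 2.0 * surface[:, None]  # surface shift only

# Split samples by the sign of their projection on the top principal component.
centered = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
cluster = (centered @ vt[0] > 0).astype(int)

# Agreement with the surface labels near 100% flags a processing artifact.
agree = max((cluster == surface).mean(), (cluster != surface).mean())
print(f"top component matches the surface labels for {agree:.0%} of samples")
```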

Was this the Liotta and Petricoin study?

Yes, it was. Basically we have looked at a variety of data sets that they have produced. They've actually produced at this point four different data sets on ovarian cancer. And the somewhat problematic issue was that we found signs of serious flaws in the experimental design in all four. So we went around giving talks saying, 'This may be a really cool way to look at tissues and blood samples. It may be a wonderful assay. But if you don't get the design right, you can be misled.'

This message has percolated up and has become a bit more widely known and appreciated. As a result, many people doing mass spec experiments have said, 'Oh, that is something we should avoid.'

Actually, the thing is, many of the steps associated with good experimental design aren't necessarily hard to carry out as long as you have thought about them beforehand. So I am optimistic in saying that I think the designs are getting better. I've seen publications where people are paying attention to a lot of these details.

Are you still working these days on the math and statistics of these types of experiments, or are you concentrating more on spreading the message of good experimental design?

Both. We still have data of this type that are being brought to us, where people want an analysis. We are also, particularly over the last year, going out and trying to sell the message of experimental design. We are working on the mathematics of how you process spectra coming off of mass spec instruments such that after processing, they will be more stable.
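
As a rough illustration of that processing step (the running-minimum baseline and total-ion-current normalization below are generic stand-ins, not a description of MD Anderson's actual pipeline), a raw spectrum is typically baseline-corrected and then rescaled so that spectra from different runs sit on a comparable scale:

```python
# Generic spectrum preprocessing: baseline correction, then normalization.
import numpy as np

def preprocess(intensities: np.ndarray, window: int = 101) -> np.ndarray:
    """Baseline-correct and TIC-normalize one spectrum (one value per m/z bin)."""
    # Running-minimum baseline: a crude stand-in for real baseline estimators.
    pad = window // 2
    padded = np.pad(intensities, pad, mode="edge")
    baseline = np.array([padded[i:i + window].min()
                         for i in range(len(intensities))])
    corrected = np.clip(intensities - baseline, 0.0, None)
    # Total-ion-current normalization: divide by the summed intensity.
    total = corrected.sum()
    return corrected / total if total > 0 else corrected

# Example: a single peak riding on a drifting baseline plus noise.
mz = np.linspace(0.0, 1.0, 2000)
raw = (5 * mz + np.exp(-(mz - 0.4) ** 2 / 1e-4)
       + 0.1 * np.random.default_rng(3).random(2000))
print(preprocess(raw).sum())   # ~1.0 once normalized
```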

What kind of analyses are you looking to do in the future?

With respect to proteomics, at the moment, we are learning a lot about the different types of mass specs available to us. We're trying to understand how to better use other types of mass spec techniques — liquid chromatography, for example — to give us more information.

We are also exploring some of the limitations of mass spec that we hadn't run into in the beginning. Mass spec is a beautiful tool for looking at proteins, but it has a fairly limited dynamic range with most of the assays we have at hand. Our instruments aren't sensitive enough to pick up both highly abundant things and things of very low abundance. This is a problem because if you think about a small tumor, it may shed some things into the bloodstream, but when they circulate, they are certainly not the most abundant things there. The most abundant things in blood are blood proteins. So this issue of trying to figure out how to enrich the samples so that you can get past this surrounding stuff of blood to focus on proteins that are specific to the disease of interest is a hard problem.

The other thing that we are trying to do is, for example, we're beginning to get samples where we have mass spec measurements, and we have these cDNA microarray measurements, and we have measurements of clinical variables, and we're trying to put together several different types of assays. For example, cancer is a screw-up at the genetic level, so if we've got our central dogma of DNA makes RNA makes protein, then if we can look at assays of the DNA — things that tell us if we have aberrant copy numbers — that's one way of looking for a change. If we can look at expression level — how many copies of mRNA are we getting — that's a second stage where we're seeing changes. And finally, if we can say here's an aberrancy in the proteins produced — can we link these three things together to come up with a more coherent story about how the disease is functioning, and what stages of the process might be most amenable to attack?
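
A hypothetical sketch of that integration (the gene names, effect sizes, and dogma-ordered dependencies below are all invented) asks, gene by gene, whether DNA copy number, mRNA expression, and protein abundance all shift in the same direction:

```python
# Toy integration across the central dogma: DNA -> RNA -> protein.
import numpy as np

rng = np.random.default_rng(4)
n_genes = 1000
genes = [f"gene{i}" for i in range(n_genes)]

# Simulated per-gene tumor-vs-normal scores at each level of measurement.
copy_number = rng.normal(size=n_genes)                          # DNA aberrations
mrna = 0.8 * copy_number + rng.normal(scale=0.5, size=n_genes)  # expression
protein = 0.6 * mrna + rng.normal(scale=0.7, size=n_genes)      # abundance

# Genes aberrant in the same direction at all three levels make the most
# coherent story about how the disease is functioning.
consistent = ((np.sign(copy_number) == np.sign(mrna)) &
              (np.sign(mrna) == np.sign(protein)))
strong = consistent & (np.abs(copy_number) > 1) & (np.abs(protein) > 1)
hits = [g for g, keep in zip(genes, strong) if keep]
print(f"{len(hits)} genes shift consistently at the DNA, RNA, and protein levels")
```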
