Q&A: New Microarray Technology Combines Sensitivity of Western Blots, Scalability of RPAs

The story originally ran on Jan. 28.

By Tony Fong

Name: Richard Jones
Position: Assistant professor, Ben May Department for Cancer Research, Institute for Genomics and Systems Biology, University of Chicago, 2006 to present
Background: Postdoctoral fellow, chemistry and chemical biology, Harvard University, 2002 to 2006; postdoctoral fellow, molecular and cellular biology, Harvard University, 2001 to 2002; PhD, biochemistry, the Center of Cancer Biology at the Albert B. Alkek Institute of Biosciences and Technology, and Texas A&M University, 2000

A scientific team led by researchers from the University of Chicago has developed a new assay that combines the sensitivity of Western blots with the scalability of reverse-phase lysate arrays, enabling scientists to study a cell's protein network in ways that they said have been impossible up to now.

In a study published Jan. 24 in the online edition of Nature Methods describing the technology, called microwestern arrays, the authors said that while Western blots are a "powerful protein-analysis method" they require a "relatively large amount of sample and a great deal of human labor," and so have had limited use for large-scale protein studies.

Reverse-phase lysate arrays can be used for quantifying large numbers of proteins from limited amounts of samples, but the technology "lacks confirmatory data for signal veracity," the authors said. Mass spectrometry, meanwhile, can lead to the discovery of new proteins, but the sample volume necessary for mass spec-based research "limit[s] the number of conditions that can be analyzed" with such technology.

In contrast, the microwestern arrays are scalable like reverse-phase arrays, but reduce sample complexity and produce signals "that can be related to protein size standards," as with Western blots, they added.

Microwestern arrays "should be useful for analysis of proteins from cell lines and tissues from which there are sufficient lysates to print hundreds of MWAs that could be distributed en masse in an analogous manner to spotted DNA microarrays for interrogation with the user's choice of antibodies," the researchers said. "The ability to obtain information regarding hundreds of proteins with the MWA method should allow advances in our understanding of cell context-specific networks underlying human disease when combined with appropriate computational modeling methods."

ProteoMonitor spoke this week with Richard Jones, an assistant professor of cancer research at the University of Chicago and the corresponding author of the Nature Methods study, about the technology and its application in the analysis of a cancer cell line with elevated levels of epidermal growth factor receptor. Using their method, Jones and his colleagues measured 91 phosphosites and 67 proteins at six time points in A431 human carcinoma cells.

Below is an edited transcript of the conversation.

Briefly describe your microwestern arrays and how they're different from existing protein arrays.

There are a couple of kinds of standard protein arrays, one of which contains functional proteins. …Then there is another kind of array that people use simply to look in cells for how much of a protein is present or whether it might be modified in some way, and those are typically called lysate arrays.

In those cases, most of the proteins have been deactivated following cell lysis, so you put a detergent-solubilized protein mixture onto nitrocellulose-coated slides, and then you would typically add some kind of an affinity reagent, like an antibody, to measure the proteins in the solution.

That method was introduced in the early 2000s and it's had a number of clinical applications, but it's been limited by the fact that most affinity-based reagents are not so selective [that they can] detect a single protein within a complex mixture of other proteins.

The lysate arrays are the type that you're trying to improve on?

That's the one I'm trying to improve upon.

Traditionally, cell biologists have always electrophoresed or had some method of reducing the complexity of the mixture to allow their affinity reagents to work better. If you somehow purify the proteins into different fractions or electrophorese them on some kind of a gel, you then potentially have only a much smaller number of proteins under any single area of whatever you're looking at, and then your affinity reagent no longer has to distinguish between 1,000 similar proteins.

You would only have to distinguish potentially between 10 proteins and then most of the signal that you might obtain would be from the thing that you're specifically looking for rather than all the other cross-contaminating interactions.

Our method basically is a way that mimics most of the electrophoretic procedures that cell biologists have traditionally used at large scale — large from the size perspective, where you take, let's say, 10 microliters of some mixture and load it onto the top of a gel.

We actually take the gels and turn them on their side and we use a non-contact … microdispenser to dispense a few hundred picoliters of sample onto the top of the gels, and then rather than dunk the gel underneath the solution like you would typically do with a gel, we actually just apply buffer strips to either side of this gel that's been turned on its side, and all of the samples end up getting separated for 9 millimeters in one direction.

Because the sample that we're depositing has such a small diameter, you actually don't need to move them very far in order to get a substantial separation of the small proteins from the large proteins.
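
To see why 9 millimeters of separation can be enough, here is a back-of-the-envelope calculation in Python; every number in it is an assumption for illustration, not a figure from the paper:

    from math import pi

    # Illustrative only: a few hundred picoliters deposited as a
    # roughly hemispherical droplet on the gel surface.
    volume_pl = 300.0
    volume_mm3 = volume_pl * 1e-6                        # 1 pL = 1e-6 mm^3
    radius_mm = (3 * volume_mm3 / (2 * pi)) ** (1 / 3)   # hemisphere: V = (2/3)*pi*r^3
    spot_mm = 2 * radius_mm

    # How many spot-widths fit within the 9-mm separation lane:
    print(f"spot ~{spot_mm:.2f} mm wide; ~{9.0 / spot_mm:.0f} spot-widths per lane")

A spot roughly a tenth of a millimeter wide leaves room for dozens of distinguishable positions along the lane, which is the point being made: small starting spots need only small migration distances.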

Is the breakthrough of this technology that you don't need much cell material or antibody to find your proteins of interest?

So that's been done before. People have microarrayed samples onto pieces of nitrocellulose, but the problem is that maybe only 5 percent of commercially available antibodies will give you a signal that can be related to the actual amount of protein in the sample.

The real breakthrough here is that we've integrated electrophoretic separation with microdeposition of protein samples. Now you're not only getting away with very small amounts of cell material, small amounts of antibodies, but … we have size standards that we're arraying right next to all of our samples, so we can say, 'The signal that we're seeing is from the protein consistent with 50 kilodaltons, and I know my protein is 50 kilodaltons.'

And so I don't have to worry about the other larger protein that's 100 kilodaltons, or the small protein that's 30 kilodaltons. I can focus on that signal, and that's what ... thus far no one has come up with a way to integrate.
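
As a rough sketch of how such size standards get used, the following Python snippet fits the standard log-linear relationship between molecular weight and migration distance and inverts it for an unknown band. The ladder values are hypothetical, and this illustrates the general SDS-PAGE principle rather than the authors' actual analysis code:

    import numpy as np

    # Hypothetical size-standard ladder: migration distance (mm) vs. mass (kDa).
    # In a microwestern the whole separation is only ~9 mm, so distances are small.
    ladder_mm = np.array([1.0, 2.5, 4.0, 5.5, 7.0])
    ladder_kda = np.array([200.0, 100.0, 50.0, 25.0, 12.5])

    # SDS-PAGE migration is roughly linear in log10(molecular weight),
    # so fit log10(kDa) against distance and invert for unknown bands.
    slope, intercept = np.polyfit(ladder_mm, np.log10(ladder_kda), 1)

    def estimate_kda(distance_mm: float) -> float:
        """Estimate the molecular weight (kDa) of a band from how far it ran."""
        return 10 ** (slope * distance_mm + intercept)

    # A band co-migrating with the 50-kDa standard:
    print(f"~{estimate_kda(4.0):.0f} kDa")

A signal whose estimated mass matches the expected mass of the target protein can then be trusted over signals at other positions, which is the confirmation step that plain lysate arrays lack.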

The way that folks have done this before is typically to take a pipeline, like an HPLC … and put all of the samples through that. But that is not a scalable way to do things unless you want to purchase hundreds of those machines. You typically would have to queue up and put every one of your samples through a pipeline.

However, in this particular case, we're enabling the scaling ability of a microarrayer, and all of our samples are going onto a piece of gel. They're being electrophoresed at the same time … and you get all the scalability that comes along with microarraying, but you also get the size-specific signals.

Is the scalability a result more of the engineering or the way you manipulated the biology?

A bit of both. What I think this method enables is, for example, someone working in stem cells who is only able to sort, let's say, 10,000 cells. They want to know what signals will turn on those cells. Typically, since you really can't trust the signals from these microarrayed lysates except with a select number of antibodies, there's been no way to interrogate what cell signals are turned on.

With this method, you could look at a few hundred select protein modifications. Again, the major breakthrough is you can scale electrophoretically separated things on a small scale, so that you can now get to biology where you just want to look at a few hundred, a few thousand cells.

The second thing is we collaborated with [Douglas] Lauffenburger's group at MIT to start to ask, 'What kinds of computational methods would you even need if you [wanted] to be able to reproducibly gain information on several hundred proteins in a cell at the same time?'

This has never been possible either. The state of the art before this was … the Luminex bead sorting or cell sorting. And you would be examining seven or eight proteins and you would examine them over and over and over and then use statistical methods, Bayesian methods to follow very small numbers of proteins. So the next breakthrough is trying to come up with computational methods.

In this case, we used a couple of methods that really hadn't been used on things much larger than five or six [proteins], and we extended the analysis to I think 17 proteins. Moving forward, we're trying to develop that so we can look at the relationships between hundreds of proteins, and that's hopefully where we'll go for our next story.

Was this computational approach you used borrowed from another research project, or was it developed specifically for what you were doing?

Bayesian networks have been used for other types of biology, [and] they typically have been used for transcriptional networks or for small numbers of proteins from cells examined over and over because you need a lot of information to feed Bayesian networks.

The real advantage of Bayesian networks is they give you a sense of edges, of which proteins are 'talking' to others as opposed to things that are just related to each other.

There has been no method that would enable the reproducible analysis of more than five or 10 proteins, so there's never been an impetus for going beyond that. For mass spectrometry, you're obtaining information about large numbers of proteins in a somewhat irreproducible fashion. From experiment to experiment you'll probably gain information on a completely different subset of proteins. And you won't really have the dynamic range or the sensitivity that you would with these immunoblot-based methods.

So we're about 1,000-fold more sensitive than that. We're now actually able to pick out [the proteins we're interested in] rather than be relegated to the proteins that we can actually see, like with mass spec, where you're much less sensitive and you're focused on the more abundant things.

We can say, 'No, we're interested in this subset of things that we know is more likely to be driving whatever our biological question is.'

The other thing with Bayesian networks is it's difficult to go beyond 17 or 18 proteins because the computational power required … gets really out of hand.

So there have to be additional mathematical tools to break networks down into sub-networks so that you can end up applying that approach anyway, because once you start increasing the number of connections for every protein, the problem starts blowing up super-exponentially.
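
His "super-exponential" description is literal. The search space for Bayesian network structure learning is the set of directed acyclic graphs over the measured proteins, and its size follows Robinson's recurrence, a standard combinatorics result (not something from the paper). A short Python sketch:

    from functools import lru_cache
    from math import comb

    @lru_cache(maxsize=None)
    def num_dags(n: int) -> int:
        """Count directed acyclic graphs on n labeled nodes (Robinson's recurrence):
        a(n) = sum_{k=1..n} (-1)^(k+1) * C(n,k) * 2^(k*(n-k)) * a(n-k), a(0) = 1."""
        if n == 0:
            return 1
        return sum((-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * num_dags(n - k)
                   for k in range(1, n + 1))

    for n in (5, 10, 17):
        # 5 proteins -> 29,281 networks; by 17 the count runs to more than 50 digits.
        print(n, num_dags(n))

The first few values are 1, 3, 25, 543, 29,281, so seven or eight proteins are already far beyond exhaustive enumeration, which is why exact structure search stalls around the 17-protein scale he mentions and why sub-network decompositions become necessary.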

Is your technology limited by the quality of antibodies?

Sure, I always tell people that whatever limitations apply to an immunoblot or a Western blot are going to apply here.

If you have antibodies that work well for a Western blot, they work identically here with the exception that you get slightly better resolution between sizes of proteins on the Western blot than you would on a microwestern blot.

But all the physical properties, the principles that make a Western blot work, apply here.

The paper says that you can use this technology to validate antibodies.

It's a huge area. The cool thing is you can use it for validation: you can go back and take the antibodies that you validated and use them for genome-wide studies, to study several thousand transcription factors, let's say, in a biological process.

It's a whole new area of biology [that's been] opened up by the ability to obtain validated antibodies and to actually know that the things that you're looking at … you can have a lot more confidence in them.

Did you envision this more as a technology to be used for discovery and identification or for validation?

It's way more than that. We're using a linear read-out technology [from LI-COR] rather than chemiluminescence so that we can have linear signal increase with linear increases in proteins.
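
Because the readout is linear, relative quantitation reduces to a straight-line calibration against a dilution series. A minimal sketch with entirely hypothetical numbers, not the paper's actual pipeline:

    import numpy as np

    # Hypothetical dilution series: loaded protein (ng) vs. measured
    # near-infrared fluorescence (arbitrary units) from a linear detector.
    amount_ng = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    signal_au = np.array([110.0, 205.0, 420.0, 830.0, 1650.0])

    # Linear detector model: signal = gain * amount + background.
    gain, background = np.polyfit(amount_ng, signal_au, 1)

    def quantify(signal: float) -> float:
        """Convert a measured band signal back into protein amount (ng)."""
        return (signal - background) / gain

    print(f"{quantify(600.0):.1f} ng")

With a saturating chemiluminescent readout this simple inversion would not hold, which is the point of choosing a linear detector.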

What that means is that beyond just validating antibodies, you can go in and interrogate large areas of biology [and find out] what proteins are increasing in abundance [or] decreasing, how you can isolate nuclear and cytosolic fractions and say, 'Where are the proteins going and how are they getting modified?' just by using modification-specific antibodies.

Everything that you use Western blots for, you can do now, except at a scale that is hundreds of fold higher and using amounts of reagents that are hundreds of fold lower.

If you're using this for identification, are you using this for targeted analysis or for shotgun analysis?

Never for shotgun. I envision a couple of different sides of the coin. On the one hand, let's say you have a comprehensive set of human transcription factor antibodies. You can monitor them all, and you don't necessarily need to know what things to look for.

You just go and do the experiment and you ask, 'What transcription factor went up? Which ones went down? How did that compare to mRNA, let's say? Are there perhaps microRNAs that are regulating translation of mRNAs that we've never been able to see before because we've only been able to look at mRNAs?'

On the other hand, let's say you have a collection of transcription factors that you already know are important in your biological system, but you just want to interrogate a lot of different experiments. You want to [screen] small molecules, let's say, and see which ones affect your 10 things.

Well, you can do that too. You just pick out the 10 that you already know that you want to look for and just monitor those reproducibly and quantitatively over a series of experimental perturbations.

Aside from the cost issues that you mentioned in your paper, are there other advantages that microwestern arrays have over a mass spec?

Well, there's the sensitivity … and reproducibility. Typically, immuno-based methods are between 1,000- and 10,000-fold more sensitive [than a mass spec].

Metaphorically, let's say you wanted to study the United States, but you could only study places that were several thousand feet in altitude, and that's all you could reach. So you kept trying to base your analysis on those parts of the pyramid that are at the very top, because that's all that your technique could do without heavy tweaking, versus looking at the whole picture at the same time.

With an affinity-based method you're able to detect a lot of proteins that are just much lower in abundance, and you can reproducibly see them from experiment to experiment. It's not like you just see them once every five experiments because they're in the noise floor, as you would with mass spectrometry, or with 2D DIGE, or any other non-affinity-based method.

What's the drawback?

The drawback is that, at this point, we don't have antibodies to everything. It would seem to me that the logical thing would be for the NIH at this point to start asking how we can start making affinity-based [reagents] that are directed at a lot of different proteins.

The other point that should not be neglected is that with mass specs, you can begin to see new things: You can see new modifications that you didn't know existed before, so you can actually discover a lot of things.

With affinity-based methods, you have to know the thing exists because you have to make a version of that protein and then direct antibodies to it. The idea is that if you already know that that thing exists … all the technologies are available for synthesizing those things today and making antibodies for them. It's just that, thus far, it's obvious there's no reason to make antibodies to all those things — there was no method or technology that could take advantage of them until now.

Can this technology be used to clinically validate things?

Sure, we've done analyses of tumors, we've done analyses of tissue. Previously, in the clinic, people used in situ [assays] because it's easy, it's what pathologists are used to. But with in situ assays you have this problem where the antibodies may be recognizing your particular protein in a tissue, but they may also be cross-reacting with a lot of other proteins.

I think it would be complementary to be able to separate small amounts of clinical materials [and] see what all the proteins are that are there.

What key biological questions are you currently trying to address with this technology that you haven't yet?

There are a number of areas of biology that my lab's interested in that are basically about how signaling networks are wired in the context of cancer biology. But there are lots of other folks who are doing stem cell research, [or] doing genome-wide association studies with HapMap samples trying to understand how mRNAs are regulated by variations in the human genome, so we're trying to ask similar questions, like, 'How are expression levels of proteins varying with variation in the human genome?'

We're asking lots of other questions in the context of cancer samples, trying to compare protein expression with genome variation in actual cancer tissues — basically a whole host of cell biological domains that were never addressable at the level of proteins, where everyone's always used mRNA expression arrays or genome sequencing methods or high-throughput sequencing. We're basically trying to extend those [methods] for the analysis of proteins.

What about in terms of the technology? What are you doing to optimize it or to extend its capabilities?

Very little that I can describe. But we're trying to get down to smaller amounts of materials, higher numbers of samples, higher sensitivity — pretty much all of the areas that people have thought about before for regular Western blots.

We're trying to extend some of those ideas so that we can get down to the level of analyzing single cells because in the end, there are lots of questions in cancer stem cell biology about whether cancer cells evolve into stem cells and go back or whether they go only one way. And many of those questions can be addressed only at the level of single cells.

And even in human stem cell biology where you're not talking about cancer, typically folks are trying to isolate several hundred cells and understand how they're talking to each other [or] how they're talking to themselves, even.

We're basically trying to make the assay more sensitive so that we can start to answer those questions.
