
Michael Brownstein On Random Primers to Prepare RNA Samples


AT A GLANCE

Michael J. Brownstein, group leader, Laboratory of Genetics, NIMH/NHGRI

MD, PhD in biochemical pharmacology, University of Chicago

Postdoctoral fellow in Julius Axelrod’s lab at the NIH

Research interests: Studies the nervous system and how drugs affect its function; how developmental choices are made in specific populations of neurons.

Published a paper in this month’s Nature Biotechnology entitled “Amine-modified random primers to label probes for DNA microarrays”

Why did you decide to improve the method for preparing RNA samples for microarray experiments?

One of the projects we are funded to work on is the Brain Molecular Anatomy Project, or BMAP. The goal of that project, initially at least, was to try to learn which genes are expressed in the developing and adult mouse brain. Later on it could expand to human brain studies as well. It was initially envisioned as an anatomical project, but to get some first-pass information, the people who organized the project decided that using arrays might be appropriate. When we began working on this project and imagined how much mouse tissue might be required to do the study using the conventional labeling techniques, we quickly realized that we were really tissue-limited. So we decided that we would need to find a better labeling method that we could use with big cDNA arrays. And it was in response to that that we worked on developing this method.

 

How did you come to the solution you present in your Nature Biotechnology paper?

We realized pretty early on that direct incorporation must be limited. If you think about the two extremes, that is, replacing all of one particular base, like thymine for example, with dye-labeled base, or replacing none of the thymines with dye-labeled base, it’s easiest to understand the problem. If you replace none of the wildtype base with fluorescent base, then of course you get no signal at all. If you replace all of the wildtype base with fluorescent base, you get very poor hybridization. And I think that’s why people have sought an optimal ratio of dye-labeled base to wildtype base when they do their first strand synthesis. The problem is that at that optimal ratio, you actually don’t incorporate very much dye into your probe. So we asked ourselves two questions. How can we make more product? And the answer to that is, by using random hexamer priming. It’s actually much more efficient than oligo-dT priming. And how can we incorporate more dye? We couldn’t incorporate more dye into the product without affecting hybridization of the probe, so we reasoned that the only place where we could add dye would be out on the tail. And that’s essentially the solution.
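To see why that optimal ratio still delivers little dye, consider a toy calculation; the numbers below are illustrative assumptions, not figures from Brownstein’s paper. The more dye-labeled base you incorporate per product, the more hybridization suffers, so the usable signal peaks at a modest ratio and then falls.

```python
# Toy model (hypothetical numbers): signal from direct dye incorporation as a
# function of the fraction f of thymidines replaced by a dye-labeled base.
# Assumes ~100 T positions per first-strand product and that each incorporated
# dye reduces hybridization by a fixed factor -- both are illustrative
# assumptions, not measured values from the paper.

N_T = 100          # T positions available per cDNA product (assumed)
PENALTY = 0.90     # per-dye multiplicative hit to hybridization (assumed)

def signal(f):
    dyes = f * N_T                  # expected dyes incorporated per product
    hyb = PENALTY ** dyes           # chance the product still hybridizes well
    return dyes * hyb               # fluorescence that ends up on the spot

best = max((signal(f / 100), f / 100) for f in range(0, 101))
print(f"optimal dye:base ratio ~ {best[1]:.2f}, relative signal {best[0]:.2f}")
# At f = 0 there is no signal; at f = 1 hybridization collapses, so the useful
# signal peaks at a modest ratio -- which is why direct incorporation is limited.
```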

 

Why is random priming more efficient than oligo-dT priming?

You get several different products spaced across the entire transcript. If you use the big targets — we print cDNAs that are about 1.5 kb — you get a much more even labeling of each transcript.
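A rough simulation can illustrate the point; the transcript length, product length, and product counts below are assumptions for illustration, not values from the interview.

```python
# Hypothetical simulation of first-strand coverage along a 1.5 kb transcript:
# random hexamer priming starts products at many positions, while oligo-dT
# priming starts every product at the 3' end. Product lengths and counts are
# assumed for illustration only.
import random

random.seed(0)
TX_LEN, PRODUCT_LEN, N_PRODUCTS = 1500, 400, 200

def coverage(starts):
    cov = [0] * TX_LEN
    for s in starts:
        for pos in range(max(0, s - PRODUCT_LEN), s):   # RT extends toward the 5' end
            cov[pos] += 1
    return cov

hexamer = coverage(random.randrange(1, TX_LEN) for _ in range(N_PRODUCTS))
oligo_dt = coverage(TX_LEN for _ in range(N_PRODUCTS))  # all products start at the 3' end

for name, cov in [("random hexamer", hexamer), ("oligo-dT", oligo_dt)]:
    covered = sum(c > 0 for c in cov) / TX_LEN
    print(f"{name:15s} fraction of transcript covered: {covered:.2f}")
```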

 

You say in the paper that this method is cheaper than using dye-labeled bases?

That’s correct, because the dye-labeled bases are relatively expensive, and the bases that you conjugate to the amino groups are relatively inexpensive. It depends on the vendor that you choose, but it actually makes a pretty big difference. Once you have synthesized the amine-modified random primer, which is not too expensive to do, then the only other significant costs are the polymerases, which aren’t so expensive, and the dyes. At this point I think we reckon that it costs us about $10 to $15 per labeling reaction. You can compare that with any other method that you like. It seems like a lot, but if you look at the Affymetrix kit, for example, you are up in the $100 range.
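As a back-of-the-envelope comparison using the per-reaction figures quoted above (the study size is an arbitrary example):

```python
# Back-of-the-envelope cost comparison using the per-reaction figures quoted
# above ($10-15 for the amine-modified primer method, roughly $100 for a kit).
# The number of labeling reactions is an arbitrary example.
reactions = 96                      # e.g. one labeling reaction per array in a study
primer_method = 12.5 * reactions    # midpoint of the $10-15 estimate
kit_method = 100.0 * reactions
print(f"amine-modified primers: ${primer_method:,.0f}")
print(f"commercial kit:         ${kit_method:,.0f}")
print(f"savings:                ${kit_method - primer_method:,.0f}")
```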

 

But the main advantage is that you require less total RNA than other methods?

Correct. That’s probably the biggest advantage, though there are a number of other methods that have been described that allow you to use as little as one µg of total RNA. But the advantage that our method has is that, at least as we described it, there is no template amplification and no signal amplification. We can easily push down the amount of RNA required by doing template amplification, for example. And there is no reason why the method couldn’t take advantage of the tyramide method or dendrimer method or any other signal amplification technique as well. We imagine being able to push down into the range of one to 10 cells without much trouble and still keep the method pretty quantitative. We are in the process of doing that. … It’s quite encouraging.
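For a sense of scale, a rough calculation, assuming roughly 10 to 30 pg of total RNA per mammalian cell (a commonly cited range, not a figure from the interview), shows how far below a microgram-scale labeling reaction one to 10 cells would be:

```python
# Rough arithmetic for what "1 to 10 cells" means in RNA terms, assuming
# ~10-30 pg of total RNA per mammalian cell (a commonly cited range, not a
# figure from the interview) and a 1 µg input for an unamplified labeling
# reaction (also an assumption).
PG_PER_CELL = (10, 30)          # assumed total RNA per cell, in picograms
UNAMPLIFIED_INPUT_UG = 1.0      # assumed input for a conventional labeling reaction

for cells in (1, 10):
    low, high = (cells * p / 1e6 for p in PG_PER_CELL)   # convert pg -> µg
    fold = UNAMPLIFIED_INPUT_UG / high
    print(f"{cells:2d} cell(s): {low:.6f}-{high:.6f} µg total RNA "
          f"(>= {fold:,.0f}x less than a 1 µg reaction)")
```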

 

Is your method suitable for all microarray experiments, using both cDNA arrays and oligo arrays?

What I would say is that the good results that we have had reflect to some extent the kind of arrays that we print. As I said, we make nice big targets. If you print oligo arrays, for example, where the target is only a 70mer instead of a 1.5 kb species, then not all of the probes that you wind up labeling can bind to that small target, and consequently you would expect to see, and in fact do see, a good bit less signal. The method is certainly fine for use with other kinds of arrays, like printed oligo arrays, but it won’t give you signal comparable to what we see on our cDNA arrays.
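A crude estimate, using an assumed labeled-fragment length, suggests why a short target captures only a fraction of a randomly primed probe:

```python
# Illustrative estimate (not from the paper) of why a 70mer target captures
# less of a randomly primed probe than a 1.5 kb cDNA target: only probe
# fragments overlapping the target region can hybridize.
TRANSCRIPT = 1500      # transcript length in bases (example)
FRAGMENT = 400         # assumed average labeled-fragment length

def capture_fraction(target_len):
    # A fragment overlaps the target if its start falls within roughly
    # target_len + FRAGMENT bases, capped at the transcript length.
    return min(1.0, (target_len + FRAGMENT) / TRANSCRIPT)

for target in (70, 1500):
    print(f"{target:5d}-base target captures ~{capture_fraction(target):.0%} of fragments")
```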

 

In the paper, you compare your method against a conventional one. One explanation why you see such a large non-overlap when looking at two-fold over- or under-expression might be that the two methods favor different sets of genes. Why would that be?

It’s a formal possibility. The conventional method is based on oligo-dT priming, and if the cDNA that you print represents the middle of the coding region as opposed to the 3’ non-coding region where the A-tail is, then you may have trouble reaching the coding region in a reverse transcription reaction. On the other hand, with our method, since it’s kind of unbiased in where the reverse transcription begins, we will make products that span the entire transcript. And I do have the feeling we get a better, I suppose a more even and less biased, labeling of the probe than you get with the other method.
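One hypothetical way to picture that 3’ bias: if oligo-dT-primed first strands terminate at random with some mean length (the value below is an assumption, not a measurement), the chance of reaching a target printed far from the poly(A) tail drops off quickly, whereas random priming has no such dependence on position.

```python
# Hypothetical model of the 3' bias: if oligo-dT-primed first strands terminate
# at random with an assumed mean length, the chance of reaching a cDNA target
# printed d bases upstream of the poly(A) tail falls off exponentially.
# The mean product length is an assumption, not a measured value.
import math

MEAN_PRODUCT_LEN = 600   # assumed mean oligo-dT first-strand length, in bases

for d in (100, 500, 1000, 2000):
    p_reach = math.exp(-d / MEAN_PRODUCT_LEN)
    print(f"target {d:4d} bases from the 3' end: "
          f"~{p_reach:.0%} of oligo-dT products reach it")
```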

 

Where do you see the greatest need for improvement in the microarray field?

There are some things that the current microarrays, at least as we print them, are not adept at. For example, if you are really interested in differential splicing, you probably would need to print either a bunch of short cDNAs, or possibly oligonucleotides, corresponding to the various exons of a gene, and in the long run I suppose this will happen. In the short run it’s a pretty daunting thing to imagine, and one would need to use methods to identify the most informative exons initially. So it’s a problem that people aren’t yet addressing with arrays. They are sort of settling for yes-or-no answers right now. Otherwise I’d say that probably the most pressing need is better and better informatics and statistics, at every level: how do you design experiments well, how do you interpret them well, how do you do clustering, how do you know when a change is significant, how do you display the data better, how do you link the genes that you detect to useful databases, and so forth.
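As a minimal illustration of just one of those questions, knowing when a change is significant, here is a sketch using made-up replicate log2 ratios and a one-sample t-test; real array analyses would also need normalization and multiple-testing correction.

```python
# Minimal sketch of "how do you know when a change is significant" for a single
# gene, using made-up replicate log2 ratios and a one-sample t-test against no
# change. Illustration only; it omits normalization and multiple-testing issues.
from scipy import stats

log2_ratios = [1.1, 0.9, 1.3, 0.8]          # hypothetical replicate measurements
t_stat, p_value = stats.ttest_1samp(log2_ratios, 0.0)

fold_change = 2 ** (sum(log2_ratios) / len(log2_ratios))
print(f"mean fold change: {fold_change:.2f}x, p = {p_value:.3f}")
```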

 

Are you in favor of standards for microarray experiments?

I am certainly in favor, if it can be done, of having a standard vocabulary for annotation. As things are right now, it’s going to be very difficult to search one experiment against another. I don’t know how much standardization you can ultimately achieve, as long as people continue to use different methods.

 

Do you use both cDNA and oligonucleotide microarrays?

Right now just cDNA. Our mouse arrays right now have 41,000 elements, about 31,000 of which are unique, and those will increase in size shortly to about 48K. Our human arrays are currently 20K or 21K, and they should increase to 36K shortly. And we have a collaboration with the people at KRIBB, the Korea Research Institute of Bioscience and Biotechnology, to do 5’-end sequencing on all the elements that we’ve printed. So we will soon know the identities of everything on the array, which has proven to be incredibly important for us. We know that there are some gridding errors in the collections that we use, and even though the number is not large, it’s a bit of a problem, and there is no elegant way to sort it out.

 

What is the importance of microarrays to your research?

They certainly play an important part in what we do. I also study human genetic traits, but in trying to understand those genetically determined diseases in humans, it’s often useful to be able to use microarrays since many of the genes that we come up with are ones for which the function is pretty obscure.
