Real-Time PCR Technical Guide, Volume Two

Table of Contents

Letter from the Editor
Index of Experts
Q1: What are your criteria for high-quality RNA?
Q2: What pre-amplification methods have you applied and validated?
Q3: How do you measure intra- and inter-assay variation?
Q4: What do you consider when selecting your quantification strategy?
Q5: What standards and templates do you use?
List of Resources

Letter from the Editor

Welcome to the latest installment in Genome Technology's tech guide series, which features insights offered by another eminent team of real-time PCR experts. In the pages to come, these contributors share their valuable and time-tested advice on finessing RT-PCR experiments and analyses.

Quantitative real-time PCR could well be described as an art in the service of science. At any one step of the technology's processes, an investigator is faced with a number of possible ways to proceed. With this in mind, we attempted to formulate questions that get to the principles underlying best practices in RT-PCR experiments. As you'll see, the experts below weigh in on several topics capturing a variety of perspectives and techniques.

Keep this on hand for detailed guidance on isolating optimal RNA, monitoring variation between and within assays, making use of standards, selecting a quantification strategy, and more. Also, don't miss the resource guide, which comprises recommended publications and websites to help make your assays a success.

Finally, many thanks are due to our group of experts, both new and returning contributors.

Jennifer Crebs

Index of Experts

Genome Technology would like to thank the following contributors for taking the time to respond to the questions in this tech guide.

Cristina Hartshorn
Department of Biology
Brandeis University

Jim Huggett
University College London

Tim Hunter
Vermont Cancer Center
University of Vermont

Mikael Kubista
Professor, Head of R&D
TATAA Biocenter

Jo Vandesompele
Center for Medical Genetics
Ghent University Hospital

Marisa Wong
Oakland Children's Hospital

Xiuling Zhang
Wadsworth Center
New York State Department of Health

Q1: What are your criteria for high-quality RNA?

Our lab has focused on optimizing the recovery of RNA, particularly from small samples such as single cells, while minimizing manipulations that can cause nucleic acid loss or shearing. For this purpose, we have developed a method that allows cell collection and lysis, removal of proteins from the nucleic acid backbone, reverse transcription, and real-time PCR to be carried out sequentially in the same tube. This fully single-tube approach, which we named PurAmp (Hartshorn et al., 2005a; Hartshorn et al., 2005b), is based on dilution and does not require binding of RNA to any kind of matrix, thus ensuring virtually complete RNA recovery. In addition, we feel that complete deproteinization of the transcripts is just as essential for RT primer binding and accurate, reproducible template quantification as it is for genomic DNA template preparation.

The PurAmp method can include a DNase digestion step but, personally, I prefer to avoid it. I have tried several DNase protocols and always ended up recovering considerably less RNA than expected. This is probably due to hydrolysis occurring during the final high-temperature step needed to inactivate the enzyme, and it is a common finding, to the point that some commercial pamphlets show a Ct shift of several cycles between the "+ DNase" and "- DNase" real-time PCR signals of the featured samples. This shift should be almost invisible, considering that the copy number per cell of a given gene is much lower than the copy number of its transcripts when the gene is expressed; you should not see a four-fold drop in template number after DNase digestion. I prefer to amplify DNA and RNA templates together and subtract the genomic DNA copy number from the total. (See my answer to Question 5 for more details.) In summary, I consider high-quality RNA to be a transcript pool that accurately represents the cellular content (no loss), is fully accessible to primers, and is not degraded to any extent.

— Cristina Hartshorn

In our laboratory in the UK we usually assess the ribosomal RNA bands using an Agilent Bioanalyzer. However, as much of our work is performed in countries in sub-Saharan Africa where this equipment is not available, we also assess the rRNA by agarose gel electrophoresis. This tells us that our ribosomal bands are OK and our extraction has worked; however, there is increasing evidence that degradation of these bands does not necessarily mean degradation of the mRNA. So while I think that using some method of quality assessment is worthwhile for one's own peace of mind, the methods currently available to do this represent the "best worst" option.

— Jim Huggett

You need to establish robust protocols for isolating high-quality RNA. Sample type may dictate which method is best. The first QC check comes during quantitation on the NanoDrop spectrophotometer, where the absorbance trace is checked for absorbance outside the 260 nm peak, which would indicate a contaminant. We examine the 260/280 ratio, which should be between 1.8 and 2.1. It is also critical to ensure that you are not using samples with compromised transcripts in your assay. In this regard, we routinely screen the integrity of RNA using the Agilent 2100 Bioanalyzer and make calls on the suitability of a sample based on the intactness of the ribosomal peaks. If samples show evidence of degradation but are crucial, we run them with a housekeeping gene known to be stable within the biological system to see if any differences in expression exist that could be the result of compromised transcripts. We also run (-) RT controls with medium- to high-expression housekeeping genes to check for gDNA contamination in cases where probes could not be placed over exon-exon boundaries. To check for nuclease activity, samples can be incubated at 37ºC for two to four hours and reanalyzed on the Bioanalyzer to check for degradation.

— Tim Hunter

There are two aspects of RNA quality: sample purity and RNA integrity. Sample purity can usually be improved if the sample can be diluted. We typically test purity by measuring absorption using the NanoDrop. The 260/280 absorption ratio should be 2 to 2.1, and we want “good-looking” absorption spectra, essentially a typical nucleic acid absorption spectrum with negligible light scatter. Absorbance around 230 nm should not be high. We test RNA integrity by recording an electropherogram using an Experion (Bio-Rad). Of course, optimally one likes very clear 18S and 28S peaks. But RNA integrity cannot be improved, and archival samples, fixed samples, and samples prepared from organs rich in degrading enzymes are often of poor integrity. This is something one has to live with. The two main reasons to test RNA quality are to learn what RNA quality a particular study generates, so that appropriate RT-qPCR protocols are used, and to identify samples that are of poorer quality than the norm in a study, which may lead to bias or incorrect results.

— Mikael Kubista

High-quality RNA should be intact and free of inhibitors. We routinely use a capillary gel electrophoresis system to assess the ratio of the 28S and 18S bands, which is fast and needs only minimal amounts of RNA. Important to note, however, is that the ratio can be tissue- or cell type-specific, so ideally you should compare your samples (of unknown quality) to an intact sample of the same cellular origin. While RNA quality control is routinely performed in microarray studies, many people in the field of qPCR have only just begun to take this factor into account. We recently published a study showing that it is extremely important to assess the quality of your RNA in PCR-based assays, as not all genes display the same level of degradation, something that can seriously skew your experimental conclusions (Perez-Novo et al., 2005). Our main conclusion is that reference genes show differential stability in intact versus degraded samples, and that one should not compare intact and degraded samples, especially if one is interested in subtle expression differences.

The assessment of ribosomal peaks is currently the gold standard, but it is only a surrogate marker for the intactness of your mRNA fraction. Together with other experts in the field, we are developing a PCR-based assay in which we compare the ratio of 5' and 3' amplicons for certain genes. By comparing this ratio in your unknown samples to that of an intact control sample, we should be able to measure mRNA intactness using a PCR assay, which is much more relevant than gel electrophoresis-based assessment of rRNA.

To evaluate the purity of our RNA preparations (e.g. absence of inhibitors), we are implementing the qPCR SPUD assay, which is easy to use and assumption-free (Nolan et al., Anal Biochem, in press).

— Jo Vandesompele
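
The 5'/3' comparison Vandesompele describes boils down to simple Cq arithmetic. Below is a minimal Python sketch of that calculation, not the assay under development; the Cq values are invented and a 100 percent PCR efficiency (amplification factor of 2) is assumed.

```python
# 3':5' ratio from Cq values, assuming an amplification factor of 2 per cycle.
# A ratio close to that of an intact control suggests the mRNA is largely intact;
# a much higher ratio points to degradation toward the 5' end.

def three_to_five_ratio(cq_5prime, cq_3prime, amp_factor=2.0):
    return amp_factor ** (cq_5prime - cq_3prime)

control_ratio = three_to_five_ratio(cq_5prime=24.1, cq_3prime=23.9)  # intact control RNA
sample_ratio = three_to_five_ratio(cq_5prime=27.6, cq_3prime=24.2)   # test sample

# Normalize to the intact control so the metric is comparable across assays.
print(f"normalized 3':5' ratio = {sample_ratio / control_ratio:.1f}")
```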

For RNA samples isolated from human tissues, my criteria for high-quality RNA are:
1. rRNA ratio (28S/18S) ≥ 1.1, and RNA integrity number (RIN) ≥ 7 (analyzed using the Bioanalyzer 2100)
2. 28S and 18S bands are obvious (electrophoresis on a denaturing agarose gel)
3. 260 nm/280 nm ratio ≥ 2.0 (measured on a spectrophotometer).

— Xiuling Zhang
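
Threshold-based criteria like those listed by Hunter, Kubista, and Zhang are easy to codify. The short Python sketch below applies Zhang's cutoffs to a batch of samples; the field names and sample records are illustrative, not taken from any instrument's output.

```python
# Minimal RNA quality screen using the cutoffs quoted above.
# Sample records and field names are illustrative only.

def passes_qc(sample, min_rrna_ratio=1.1, min_rin=7.0, min_260_280=2.0):
    """Return True if the sample meets all three quality criteria."""
    return (sample["rrna_28s_18s"] >= min_rrna_ratio
            and sample["rin"] >= min_rin
            and sample["a260_280"] >= min_260_280)

samples = [
    {"name": "tissue_01", "rrna_28s_18s": 1.6, "rin": 8.2, "a260_280": 2.05},
    {"name": "tissue_02", "rrna_28s_18s": 0.9, "rin": 5.4, "a260_280": 1.92},
]

for s in samples:
    print(s["name"], "PASS" if passes_qc(s) else "FAIL")
```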

Q2: What pre-amplification methods have you applied and validated?

To date we have not had to perform a pre-amplification step to generate enough starting material for a low-abundance target or a sample with low recovery of nucleic acid. A specific priming strategy in the reverse transcription reaction often overcomes issues with a low-expressing target.

There are several systems now on the market that address this issue well, but it would be critical to validate the pre-amplification step to ensure no bias is being introduced into the assay.

— Tim Hunter

We participated in a multilaboratory study that validated [NuGen's] Ovation technology, which amplifies mRNA, and we also beta-tested [Applied Biosystems'] TaqMan PreAmp Master Mix Kit to pre-amplify cDNA before qPCR. Both work well in our hands. The latter we use to study expression of multiple genes in individual cells. Primers for all amplicons are added to the sample at a concentration much lower than in regular PCR. After 14 temperature cycles the reaction is halted. The products are diluted and used as templates in a second PCR using a single primer pair and probe per reaction. Up to 100 targets can be pre-amplified in parallel. The obvious validation of pre-amplification is to perform pre-amplification and direct amplification in parallel on some more concentrated samples and test how well the expression ratios of the genes are preserved. We also use a highly expressed gene, such as 18S, as an internal control for the pre-amplification.

— Mikael Kubista
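
The validation Kubista describes, running pre-amplified and directly amplified aliquots of the same sample in parallel, amounts to checking that relative expression is preserved. A minimal Python sketch of that check follows; the Cq values are invented and a uniform amplification factor of 2 is assumed.

```python
# Compare the expression of a target relative to a control gene with and without
# pre-amplification of the same sample. Cq values are illustrative.

def rel_quantity(cq_target, cq_control, amp_factor=2.0):
    """Target expression relative to the control gene."""
    return amp_factor ** (cq_control - cq_target)

direct = rel_quantity(cq_target=28.4, cq_control=18.9)   # direct amplification
preamp = rel_quantity(cq_target=18.1, cq_control=8.7)    # after pre-amplification and dilution

bias = preamp / direct
print(f"fold bias introduced by pre-amplification: {bias:.2f}")  # ideally close to 1
```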

We have evaluated several whole-transcriptome amplification methods, both T7 RNA polymerase-based linear RNA amplification (either sense or antisense) and PCR-based exponential amplification methods. Our most important quality criterion is the conservation of expression ratios between samples, not between genes. Most (if not all) procedures introduce a (reproducible) bias in the amplification rate of the different transcripts, which means that you cannot easily compare the expression levels of different genes after a pre-amplification step, something that is already hard to do without amplification. Luckily, the good kits more or less conserve the expression ratio between samples, which is what most of us are interested in after all. Apart from the whole genome/transcriptome pre-amplification methods, other methods have recently been developed in which a selected panel of genes is pre-amplified using a limited-cycle multiplex pre-amplification step, followed by dilution and monoplex amplification of each target. When you know in advance which genes need to be studied, these methods also seem promising.

— Jo Vandesompele

Q3: How do you measure intra- and inter-assay variation?

The development of LATE (Linear-After-The-Exponential)-PCR in our laboratory has recently given us additional and more sensitive tools to monitor inter- and intra-assay variability. LATE-PCR generates single-stranded amplicons with very high efficiency, offering a wide array of advantages (Sanchez et al., 2004; Pierce et al., 2005).

I have adopted LATE-PCR for my recent gene expression studies because linear amplification allows me to simultaneously measure more than one transcript in a single cell, even if the RNAs are present in very different amounts. In a traditional (symmetric) duplex PCR assay, the first, more abundant amplicon is generated exponentially and its real-time fluorescent signal soon reaches a plateau. By this time, the pool of PCR reagents is considerably depleted, which decreases the efficiency of amplification of the second, less-abundant template. In contrast, during a real-time LATE-PCR assay all amplicons accumulate linearly and all fluorescent signals have a constant slope until the end of the reaction, never reaching a plateau. This strategy ensures efficient co-amplification and reliable quantification of multiple mRNAs independently of their relative abundance.

I have found that the slopes of the real-time LATE-PCR fluorescent signals are an extremely sensitive way to monitor intra-assay variation, as they are affected by fine differences in assay composition that cannot be seen with symmetric PCR. This is particularly relevant to my work because I perform cell lysis, RT, and PCR by serial dilution in the same tube. I feel that investigating these properties of LATE-PCR in conjunction with RT is important, as they open the door to end-point gene expression quantification.

— Cristina Hartshorn

Measure variation with replicates!

I am increasingly of the opinion that intra-assay variation, while the simplest to measure, tells us very little. A robust, efficient PCR assay really will not contribute much variation, so we are all happy with our tight replicates. I feel this is misleading.

Inter-assay variation, on the other hand, is where the real problem lies, especially with RT-PCR. Not only can the sample you are measuring be biologically variable, but the plethora of steps required to obtain a cDNA sample is ideal for introducing error. Consequently, I would favor replicating where it controls for the most variation, ideally prior to extracting the sample. Try to replicate your experiment to increase the numbers in each group. This will always require more work and money, but your findings will be more robust.

— Jim Huggett

External controls or internal quantification standards are used to measure the reproducibility of an assay over a period of time. The external control can be a sample generated in the lab or a reference sample purchased from a manufacturer. Reviewing the standard deviations generated within a run and across runs over time allows assessment of intra- and inter-assay variation.

— Tim Hunter

Inter-assay variation can be assessed by including an identical sample in all qPCR runs and making sure that the standard deviation of its Cts is not much larger than the standard deviation of duplicate samples within one plate.

Intra-assay variation is controlled by duplicate samples. The duplicates should be introduced as early as possible in the experimental setup to account for as much of the variation as possible. The technical variation of the qPCR, for example, is insignificant compared to that of the reverse transcription (Ståhlberg et al., 2004), and most likely also compared to the extraction step. Sampling may also be a large source of variability, particularly when analyzing tissue samples, which are often heterogeneous. Best, of course, is if one can collect homogeneous material using laser microdissection, but even then one should collect several samples and compare them. If sample size is very small, the large natural variation in expression among individual cells becomes important (Bengtsson et al., 2005).

— Mikael Kubista
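
Kubista's check, that the run-to-run spread of a common control sample stays close to the within-plate spread of duplicates, can be computed in a few lines. The Python sketch below uses invented Cq values and is only meant to show the arithmetic.

```python
# Contrast the run-to-run variation of a control sample with within-plate
# duplicate variation. All Cq values are illustrative.
from statistics import stdev

# The same control sample measured once per run, across five runs.
control_across_runs = [22.1, 22.4, 21.9, 22.6, 22.2]

# Within-plate duplicates of several samples from a single run.
duplicates = [(24.1, 24.2), (27.8, 27.6), (19.9, 20.1)]

inter_sd = stdev(control_across_runs)
# Pooled SD from duplicate pairs: each pair contributes (a - b)^2 / 2 to the variance.
intra_sd = (sum((a - b) ** 2 for a, b in duplicates) / (2 * len(duplicates))) ** 0.5

print(f"inter-assay SD (control sample): {inter_sd:.2f} cycles")
print(f"intra-assay SD (duplicates):     {intra_sd:.2f} cycles")
```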

For gene expression assays, we use duplicated reactions in the same plate, and proceed with calculations using the mean quantification cycle value and its standard error. Throughout all further calculations, we propagate the error using the delta method (based on a truncated Taylor series expansion), to finally obtain normalized and rescaled relative quantities with their corresponding errors, reflecting the intra-assay variation. Of note, there is no need for the reference genes and the gene of interest to be tested on the same plate. These are independent assays and have nothing to do with each other (except for being tested on the same templates). Inter-assay or run-to-run variation is an often underestimated kind of variation. Many people believe that quantification cycle values are absolute and can be compared across runs. However, such values are only meaningful within a particular run or plate. To compare results from different plates, one needs one (or preferably more) inter-run calibrator(s), i.e. templates that are tested on both plates (such as the same positive control(s), unknown sample(s), or standard dilution points). Knowing that these should give the same result, one can correct for possible differences. The more inter-run calibrators are used, the more accurately and precisely the plates can be calibrated. Of course, errors should be properly propagated during the calibration procedure (implemented in qBase).

— Jo Vandesompele
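
The inter-run calibration Vandesompele outlines can be sketched as follows: relative quantities of the shared calibrators are compared between runs and the resulting factor is applied to the second run. This Python sketch shows the general idea only, with invented numbers; it is not the qBase algorithm and does not propagate errors.

```python
# Bring relative quantities from a second run onto the scale of the first run
# using samples measured on both plates (inter-run calibrators). Values illustrative.
from statistics import geometric_mean

calibrators_run1 = {"cal_A": 1.00, "cal_B": 0.52}
calibrators_run2 = {"cal_A": 1.31, "cal_B": 0.70}

# Per-calibrator ratio run2/run1, summarized by the geometric mean.
correction = geometric_mean(
    [calibrators_run2[k] / calibrators_run1[k] for k in calibrators_run1]
)

unknowns_run2 = {"sample_7": 2.4, "sample_8": 0.9}
calibrated = {name: q / correction for name, q in unknowns_run2.items()}
print(calibrated)
```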

I measure intra-assay variation by averaging the coefficient of variation of sample triplicates run on the same plate. I measure inter-assay variation using a set of standards that are run on every plate of the same gene assay.

— Marisa Wong

We measure intra-assay variation by performing two or more reactions on the same sample and calculating the mean Ct, standard deviation, and coefficient of variation. For quantification experiments, we usually run duplicate or triplicate reactions for all samples. If the cycle number (Ct) difference between duplicates or among triplicates is smaller than 0.5, we consider the variation acceptable and use the mean Ct as the final result.

For inter-assay variation, we include one common sample in all batches of the same quantification experiment, and calculate the mean Ct, SD and CV. [This method] can yield comparable results among different batches of experiments.

— Xiuling Zhang
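
Zhang's intra-assay metrics, mean Ct, SD, CV, and the 0.5-cycle acceptance check, translate directly into code. A minimal Python sketch with invented Ct values:

```python
# Mean Ct, SD, and CV for a set of triplicate reactions, plus the 0.5-cycle
# acceptance check described above. Ct values are illustrative.
from statistics import mean, stdev

triplicate = [26.31, 26.45, 26.22]

ct_mean = mean(triplicate)
ct_sd = stdev(triplicate)
cv_percent = 100 * ct_sd / ct_mean
spread = max(triplicate) - min(triplicate)

print(f"mean Ct = {ct_mean:.2f}, SD = {ct_sd:.2f}, CV = {cv_percent:.2f}%")
print("acceptable" if spread < 0.5 else "repeat the reactions")
```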

Q4: What do you consider when selecting your quantification strategy?

I think that careful primer design is essential to obtain very specific and efficient template amplification. New and improved software programs are available to researchers for this purpose. Our laboratory is focused on high-quality PCR and has devised a family of reagents that prevent mis-priming and primer dimerization (Elixirs, patent pending). As a final test, we routinely confirm the specificity of our amplicons by sequencing. This is particularly convenient when using LATE-PCR or RT-LATE-PCR because the amplicons generated are already single-stranded.

— Cristina Hartshorn

Much of our work is performed in the developing world, where our laboratories are less high-tech and researchers are required to perform numerous techniques, often with limited training. Consequently, our quantification strategies are designed with simplicity in mind; we use "absolute quantification" and report approximate copy number, favoring this as it is more transferable to non-molecular specialists. Furthermore, by performing standard curves, we automatically incorporate the estimated assay efficiency into the reported result, a fact that is often overlooked when simply dealing with delta-Ct.

— Jim Huggett
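
Huggett's point about standard curves automatically folding in assay efficiency can be seen in the arithmetic itself: the slope of the Ct versus log copy number line carries the efficiency, and unknowns are read off the fitted line. A minimal Python sketch with invented values (requires Python 3.10+ for statistics.linear_regression):

```python
# Absolute quantification from a standard curve: fit Ct against log10(copies),
# derive the efficiency from the slope, and read unknowns off the line.
# Copy numbers and Ct values are illustrative.
import math
from statistics import linear_regression  # Python 3.10+

copies = [1e6, 1e5, 1e4, 1e3, 1e2]
cts = [16.1, 19.5, 22.9, 26.2, 29.6]

slope, intercept = linear_regression([math.log10(c) for c in copies], cts)
efficiency = 10 ** (-1 / slope) - 1  # 1.0 corresponds to 100% efficiency

def copies_from_ct(ct):
    return 10 ** ((ct - intercept) / slope)

print(f"efficiency = {efficiency:.0%}")
print(f"unknown at Ct 24.8 ≈ {copies_from_ct(24.8):.0f} copies")
```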

The type of assay being developed and the question being asked will often drive what type of quantification system a lab decides to employ. In the case of quantifying bacterial load in a tissue sample, a standard curve method of quantification would be used. A standard curve generated with known copy numbers would be applied to the unknown samples for an "absolute" quantification strategy. Sensitivity is the key in developing an assay that can detect down to very low copy numbers or even one copy, to assess low bacterial load.

When selecting the quantification strategy for a gene expression assay, PCR efficiencies are used to determine whether targets and housekeeping genes are amplifying at similar rates. When matched PCR efficiencies cannot be established for the comparative Ct method of quantification, primers are redesigned or a relative standard curve is generated to compensate for the differences in rates.

— Tim Hunter
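
The comparative Ct method Hunter refers to assumes target and reference amplify with closely matched efficiencies; given that, the fold change is a simple delta-delta-Ct calculation. A minimal Python sketch with invented Ct values:

```python
# Comparative Ct (delta-delta-Ct) relative quantification, valid only when target
# and reference genes amplify with closely matched efficiencies. Values illustrative.

def ddct_fold_change(ct_target_test, ct_ref_test, ct_target_calib, ct_ref_calib):
    dct_test = ct_target_test - ct_ref_test      # normalize test sample to reference gene
    dct_calib = ct_target_calib - ct_ref_calib   # normalize calibrator sample
    return 2 ** -(dct_test - dct_calib)

fold = ddct_fold_change(ct_target_test=25.0, ct_ref_test=18.0,
                        ct_target_calib=27.5, ct_ref_calib=18.2)
print(f"fold change versus calibrator: {fold:.1f}")
```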

It depends very much on what we are looking for. Standard curves based on purified template or PCR product are very easy to construct, but they are not very reliable because matrix effects are not accounted for. If one needs to determine the amount of a particular mRNA in a sample, one should spike it with the target mRNA (produced by in vitro transcription) and use the method of standard additions. If there is enough sample template, one can perform a serial dilution to account for the matrix effect, but this is less reliable because the contaminants are diluted and the PCR efficiency may change with the dilution. For relative quantification, the best case is if one has two genes that respond reciprocally to the studied conditions. Their relative expression can be measured very accurately and can be a powerful disease indicator (Ståhlberg et al., 2003). If there is only one reporter gene, its expression should be normalized to that of one, or preferably more than one, validated reference gene. We supply panels of potential control genes, and software such as GeNorm can be used to identify the most appropriate ones for each particular study. If the goal instead is to classify samples, such as positive and negative for disease, it is better to measure the expression of many reporter genes and use classification methods based on expression profiles, using, for example, the GenEx software. In profiling studies, reference genes are not used.

— Mikael Kubista

Having validated the assays (partially covered in Q5 and in RT-PCR vol. I) and assessed the samples' quality (partially covered in Q1), the first big step in our quantification strategy is validation of candidate reference genes (covered in RT-PCR vol. I). Another important part of our quantification strategy involves the minimization of experimental variation, which is best achieved by following the 'sample maximization' setup (putting all samples, or as many as possible, on the same plate). The opposite experimental setup tries to maximize the number of different genes simultaneously assayed on the same plate. The latter setup (used in prospective studies, among others), however, needs proper inter-run calibrators to deal with the inherent run-to-run variation (more on this in the previous question). After establishing the proper experimental setup and performing the actual experiments, we finally apply an advanced quantification model, based on a proven delta-Ct method with gene-specific efficiency correction and multiple reference gene normalization [as implemented in our free software qBase (Hellemans et al., in preparation)].

— Jo Vandesompele
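
A sketch of the kind of model Vandesompele describes, gene-specific efficiency correction followed by normalization to the geometric mean of several reference genes, is shown below in Python. It is an illustration of the general calculation, not the qBase implementation, and omits error propagation; all Cq and efficiency values are invented (efficiency here means the amplification factor per cycle, with 2.0 corresponding to 100 percent).

```python
# Efficiency-corrected relative quantities, normalized to multiple reference genes.
# All numbers are illustrative.
from statistics import geometric_mean

def rq(cq, cq_calibrator, amp_factor):
    """Relative quantity versus a calibrator sample, with a gene-specific amplification factor."""
    return amp_factor ** (cq_calibrator - cq)

# Gene of interest.
goi = rq(cq=24.6, cq_calibrator=26.9, amp_factor=1.93)

# Two validated reference genes, each with its own amplification factor.
ref1 = rq(cq=19.2, cq_calibrator=19.0, amp_factor=1.98)
ref2 = rq(cq=21.4, cq_calibrator=21.1, amp_factor=1.90)

normalized = goi / geometric_mean([ref1, ref2])
print(f"normalized relative expression: {normalized:.2f}")
```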

I think one of the most important aspects to consider is how comfortable one feels with the units of the data generated. For example, some may find it difficult to understand or express data in Cts or arbitrary units and thus may prefer to use a standard curve.

— Marisa Wong

First consideration is the aim of the quantification study: whether to quantitate absolute copy number or relative copy number.

Second consideration is whether standards are needed, and what type of standards should be used.

Third consideration is what normalization factors are going to be used.

Fourth consideration is the details of the quantification experiment, such as primer design and removal of DNA contamination.

— Xiuling Zhang

Q5: What standards and templates do you use?

For my gene expression studies, I always select primer pairs inside an exon of the gene of interest, so that amplification of genomic DNA and cDNA sequences during PCR generates the same product. This strategy allows me to quantify template numbers in my samples by comparison with a standard curve obtained by preparing serial dilutions of commercially available genomic DNA. The efficiency of PCR is exactly the same for the standard and the unknown samples because the primers used and the amplicon produced are identical in the two cases.

This approach can be applied to the study of both intronless genes and genes with introns, and it presents additional advantages. It can be used to quantify RNA as well as DNA without the need for a different assay. The presence of DNA is useful to confirm the success of PCR in the absence of gene expression; for this reason, and because of the problems created by DNase treatment, I prefer to keep both RNA (cDNA) and genomic DNA in my samples and co-amplify them. I then subtract the number of genomic DNA copies present in the sample from this "total template copy number" and thus calculate the number of mRNA copies. Genomic DNA copy numbers can be measured by analyzing "no RT" samples in parallel with "+RT" samples, but this is unnecessary when working with single cells, where the copy number of a gene is known. On the other hand, the presence of one or two copies of genomic DNA in the absence of RNA provides proof that a cell was successfully transferred to the test tube and that the absence of gene expression was not a manipulation artifact (see Hartshorn et al., 2004, for more details).

— Cristina Hartshorn

We use standards with all our assays (and report using "absolute numbers"). The template we use is the amplicon of interest cloned into a vector, which is then linearized. To keep the respective assays as similar as possible, we include 250 μg/ml tRNA in our reaction master mix (with standards, controls, and unknowns).

— Jim Huggett

We often use standards to generate standard curves for quantification of unknown samples. The standards we use vary depending on the question asked and the accuracy desired. For absolute standard curves, we often use plasmids with known copy or molecule numbers. For relative quantification, we often use a sample known to be a high expresser of the target and perform serial dilutions covering three to six logs, depending on the sensitivity needed. We have also used concentrated PCR products as standards, but would caution anyone who uses this method to be extremely careful, since it is so easy to contaminate your work area or tools with this approach. We rarely use PCR products as standards for this reason.

— Tim Hunter

We use standards as a quality control during assay development and setup. For this we use purified PCR product because it is easy to generate. But purified template is not a suitable standard for quantification, even when used as a spike, because it is shorter than the natural template that dominates in the first PCR cycles and may be amplified with quite different efficiency. A linearized plasmid is then a better choice, but these are only good for DNA quantification. For RNA quantification, it is preferable to use an RNA standard; the Universal Reference RNAs are an option. We are also closely following the work of the External RNA Controls Consortium, which is testing 140 control sequences, several of which are artificial.

— Mikael Kubista

Following our thorough in silico qPCR assay evaluation using an automated pipeline (Pattyn et al., 2006), we use standards to determine the efficiency of our qPCR assays. We use either serially diluted genomic DNA or Stratagene QPCR Reference Total RNA (six 4-fold dilution points, from 64 ng down to 0.0625 ng/reaction), assayed in triplicate. To minimize adsorption of low-copy-number template to the test tube, Poisson sampling effects, and autohydrolysis, we dilute the template in 10 ng/μl lambda phage carrier DNA (others successfully use E. coli tRNA as carrier). Obviously, our genomic DNA standard can only be used if our gene expression assay primer pair does not span an intron. We often try to design such primer pairs, because they should work on genomic DNA, which serves as an easy positive control to see if the assay works. If you don't know in advance whether your gene of interest is expressed, this is a straightforward way to validate the assay.

More recently, we are using long oligonucleotides as template for our standards (six 10-fold dilution points, from 1,000,000 down to 10 molecules), again diluted in carrier DNA.

Just a small note on PCR efficiency determination using a serial dilution: while this method is still considered the gold standard, almost no one calculates the error on the estimated efficiency or propagates this uncertainty through the downstream calculations. Interestingly, the formulas provide clear insight into how the error can be minimized, both by expanding the dilution range and by using more dilution points. Our freely available qBase software for automated qPCR data analysis and management is able to calculate the error on the estimated efficiency and to propagate it through the subsequent calculations.

— Jo Vandesompele
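
The calculation Vandesompele alludes to, efficiency from the slope of a dilution series plus a standard error on that estimate, is sketched below in Python using only the standard library. The Cq values are invented and the error propagation is a plain delta-method approximation, not the qBase implementation.

```python
# PCR efficiency from a serial dilution, with a standard error on the estimate.
# Input amounts follow the 4-fold series mentioned above; Cq values are illustrative.
import math

log_inputs = [math.log10(x) for x in (64, 16, 4, 1, 0.25, 0.0625)]  # ng/reaction
cqs = [17.2, 19.3, 21.2, 23.3, 25.4, 27.3]

n = len(log_inputs)
mean_x = sum(log_inputs) / n
mean_y = sum(cqs) / n
sxx = sum((x - mean_x) ** 2 for x in log_inputs)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(log_inputs, cqs))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Standard error of the slope from the regression residuals.
residuals = [y - (intercept + slope * x) for x, y in zip(log_inputs, cqs)]
se_slope = math.sqrt(sum(r ** 2 for r in residuals) / (n - 2) / sxx)

efficiency = 10 ** (-1 / slope) - 1
# Propagate the slope error to the efficiency (first-order delta method).
se_efficiency = abs((efficiency + 1) * math.log(10) / slope ** 2) * se_slope

print(f"slope = {slope:.3f} ± {se_slope:.3f}")
print(f"efficiency = {efficiency:.1%} ± {se_efficiency:.1%}")
```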

I always create standards for each of my assays. These are typically PCR amplicons containing the target area and are generated from cDNA templates.

— Marisa Wong

List of Resources

Our RT-PCR experts referred to a number of papers and Web tools, which we’ve compiled below.

Publications

Bar T, et al. (2003) Kinetic Outlier Detection (KOD) in real-time PCR. Nucleic Acids Res, 31: e105.

Bengtsson M, et al. (2005) Gene expression profiling in single cells from the pancreatic islets of Langerhans reveals lognormal distribution of mRNA levels. Genome Res, 15: 1388-1392.

Hartshorn C, et al. (2005a) Rapid, single-tube method for quantitative preparation and analysis of RNA and DNA in samples as small as one cell. BMC Biotechnol, 5: 2.

Hartshorn C, et al. (2005b) Laser zona-drilling does not induce hsp70i transcription in blastomeres of 8-cell mouse embryos. Fertil Steril, 84: 1547-1550.

Hartshorn C, et al. (2004) Optimized real-time RT-PCR for quantitative measurements of DNA and RNA in single embryos and blastomeres. In: Bustin SA, ed. A-Z of Quantitative PCR. International University Line: La Jolla; pp.675-702.

Pattyn F, et al. (2006) RTPrimerDB: the real-time PCR primer and probe database, major update 2006. Nucleic Acids Res, 34 (Database issue): D684-8.

Perez-Novo CA, et al. (2005) Impact of RNA quality on reference gene expression stability. Biotechniques, 39(1):52, 54, 56.

Pierce KE, et al. (2005) Linear-After-The-Exponential (LATE)-PCR: primer design criteria for high yields of specific single-stranded DNA and improved real-time detection. Proc Natl Acad Sci USA, 102: 8609-8614.

Sanchez JA, et al. (2004) Linear-after-the-exponential (LATE)-PCR: an advanced method of asymmetric PCR and its uses in quantitative real-time analysis. Proc Natl Acad Sci USA, 101: 1933-1938.

Ståhlberg A, et al. (2003) Quantitative real-time PCR method for detection of B-lymphocyte monoclonality by comparison of κ and λ immunoglobulin light chain expression. Clin Chem, 49: 51-59.

Ståhlberg A, et al. (2004) Properties of the reverse transcription reaction in mRNA quantification. Clin Chem, 50: 509-515.

Websites

Endogenous Control Gene Panels
http://www.tataa.com/referencepanels.htm

Experion
http://www.bio-rad.com

External RNA Controls Consortium (ERCC)
http://www.cstl.nist.gov/biotech/Cell&TissueMeasurements/GeneExpression/ERCC.htm

GenEx
http://www.multid.se/GenEx/genex.htm

GeNorm
http://medgen.ugent.be/genorm

NanoDrop
http://www.nanodrop.com

Ovation RNA Amplification System
http://www.nugeninc.com

Qbase
http://medgen.ugent.be/qbase/

TaqMan PreAmp Master Mix Kit
http://www.appliedbiosystems.com

Universal Reference RNAs
http://www.stratagene.com

Acknowledgments

Many thanks to Xinxin Ding of the Wadsworth Center for advising on the answers submitted by Xiuling Zhang.