Real-time PCR, Vol III

Table of Contents

Letter from the Editor
Index of Experts
Q1: How do you measure the relative levels of two genes within the same RNA sample?
Q2: How do you predict assay quality prior to experimental evaluation?
Q3: How do you measure and control for efficiency?
Q4: Which computational tools are best for data analysis?
Q5: What is the minimal assay information you should report?
List of Resources


Letter from the Editor

It was a little more than a year ago that Genome Technology kicked off its technical reference guide series with our first real-time PCR guide. I distinctly remember being a little nervous about producing two more volumes on the topic in the space of a year. As it turns out, my anxiety was completely unwarranted — and even naïve. There actually seem to be enough PCR-related experimental issues to fill more tech guides than we could have produced in one year.

This is due to the technology's wide applicability. It's an indispensable tool for anyone toiling in biology's trenches — even the barest life science lab has a qPCR station, and understanding the principles of PCR is a first step in any life science education. Whether you're quantifying the expression of genes or microRNAs, real-time PCR is the gold standard for making rapid and specific measurements.

For this latest installment on real-time PCR, we've assembled another exemplary panel of experts. We put several questions to this impressive team of researchers, and they were each gracious enough to reply with thoughtful and detailed responses. Read on for their advice on measuring relative expression, predicting assay quality, controlling for efficiency, and more. Also, be sure not to miss the resource guide, which includes recommended reading and websites mentioned by the experts below.

— Jennifer Crebs

Index of Experts

Genome Technology would like to thank the following contributors for taking the time to respond to the questions in this tech guide.

Vladimir Beneš
European Molecular Biology Laboratory

Cristina Hartshorn
Department of Biology
Brandeis University

Mikael Kubista
TATAA Biocenter

Michael Pfaffl
Technical University of Munich

Gregory Shipley
University of Texas Health Science Center, Houston

Xiuling Zhang
Wadsworth Center
New York State Department of Health

Q1: How do you measure the relative levels of two genes within the same RNA sample?

When measuring two genes, or any number of transcripts, in the same RNA sample, we use reference genes for truly relative quantification. To define and identify them, we use TATAA's reference gene panel for human genes; a mouse panel has just been released.

To find suitable transcripts for this task, we run geNorm, developed by Jo Vandesompele at the University of Ghent in Belgium. Lately, we have started applying an assay called EAR, an acronym for Expressed Alu Repeats, which was also conceived by Jo Vandesompele. Currently, EAR is only available for human genes. In a nutshell, it is an almost universal reference, at least for human transcripts, because there are about 1,500 expressed Alu elements in human transcripts. These Alu elements are mostly integrated in the 3' UTR, so they are part of the process right from the beginning. The slight disadvantage of this assay is that the human genome contains more than 1 million of these elements, so you need to be really careful to eliminate all genomic DNA. Otherwise they simply come up, so it is essential to take care with the removal of genomic DNA, which, in this particular case, is contamination.

So, we start with the panels, and in most cases we have been successful. If we are not, we look into microarray data to find suitable reference genes.

— Vladimir Beneš

We routinely analyze RNA in very small samples, for instance mouse embryos or even single cells recovered from embryos. Because we don't like to subdivide such samples, our approach is to simultaneously quantify both mRNAs of interest. To do so, we perform RT followed by duplex LATE-PCR, an advanced form of asymmetric amplification invented in our laboratory (Sanchez et al., 2004; Pierce et al., 2005). One of the main advantages of this technique is that both the abundant and the rare templates in a sample are amplified with equal efficiency, so that quantification is independent of the relative amounts of the two templates. This is not the case for conventional, symmetric PCR that always favors amplification of the most abundant template and often suppresses signals from low copy number templates. In order to calculate the copy number of each mRNA target we use standard scales built with serial dilutions of genomic DNA. We can do this because all our template sequences are chosen within genes' exons and are thus identical in genomic DNA and cDNA; this strategy also ensures that efficiency is the same for the standard curve and the cDNA amplicons (Hartshorn et al., 2004).

I have been using this approach for RT-real-time-LATE-PCR assays and I employ two sequence-specific probes labeled with different fluors to identify the transcripts that I am interested in. The amplification products of LATE-PCR are single-stranded, which gives me great freedom for probe design and length because the probe is free to bind to the target amplicon without having to compete with a complementary strand. Some of my colleagues are also utilizing LATE-PCR for quantitative end-point analysis, which eliminates the need for real-time analysis. Because the amplification plots generated by LATE-PCR are parallel lines that do not plateau, final fluorescence at a chosen end cycle is proportional to the amount of starting template. Using this strategy we have had some promising preliminary results with DNA templates and are thinking about RNA applications.

— Cristina Hartshorn

To accurately determine the relative levels of two genes in a single sample is very complicated, requiring the determination of RT yields for both RNAs, the PCR efficiencies for both assays (preferably in matrix of the samples), and the relative sensitivity of the two assays (see eq. 5 in Kubista et al., 2006). Much better is to measure the relative expression ratio of two genes in two samples; for example, the relative expression level of the two genes in a test sample compared to the relative expression level in a control sample. In such comparisons most unknowns cancel. For these kinds of measurements it is most important that the samples are similar and that the sample matrices do not inhibit the two assays differently.
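The cancellation Kubista describes can be sketched numerically. The function below is our own illustration (names are hypothetical, and this is not eq. 5 from Kubista et al., 2006); efficiencies are expressed as per-cycle amplification factors, so 2.0 means perfect doubling. Because each gene is compared with itself across the two samples, RT yields and assay sensitivities drop out:

```python
def relative_ratio(e_a, e_b, ct_a_control, ct_a_test, ct_b_control, ct_b_test):
    """Ratio of (gene A / gene B) in a test sample relative to a control.

    e_a, e_b: amplification factors per cycle (2.0 = perfect doubling).
    RT yields and assay sensitivities cancel because each gene is
    compared with itself across the two samples.
    """
    ratio_a = e_a ** (ct_a_control - ct_a_test)   # fold change of gene A
    ratio_b = e_b ** (ct_b_control - ct_b_test)   # fold change of gene B
    return ratio_a / ratio_b

# With perfect doubling, gene A coming up one cycle earlier in the test
# sample while gene B is unchanged gives a ratio of 2.0.
print(relative_ratio(2.0, 2.0, 25.0, 24.0, 22.0, 22.0))  # → 2.0
```

As Kubista notes, this cancellation only holds if the sample matrices do not inhibit the two assays differently.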

— Mikael Kubista

Relative quantification determines the changes in steady-state mRNA levels of a gene across multiple samples and expresses them relative to the levels of an internal control RNA. This reference gene is often a housekeeping gene and can be co-amplified in the same tube in a multiplex assay or amplified in a separate tube. Relative quantification therefore does not require standards with known concentrations, and the reference can be any transcript, as long as its sequence is known. Relative quantification is based on the expression levels of a target gene versus one or more reference gene(s), and in many experiments it is adequate for investigating physiological changes in gene expression levels. Various mathematical models have been established to calculate the expression of a target gene in relation to an adequate reference gene. Calculations are based on the comparison of a distinct cycle determined by various methods, e.g. crossing points (CP) or cycle threshold (CT) values at a constant level of fluorescence, or CP acquisition according to an established mathematical algorithm. To date, several mathematical models have been developed for calculating the relative expression ratio:

The 2^(-delta-delta CT) approach

This is the most widely used approach in quantitative RT-PCR. Scientists want to measure the "relative" mRNA expression changes of a target gene on the basis of a non-regulated reference or housekeeping gene mRNA. To get a general overview of the physiological expression changes, the 2^(-delta-delta CT) approach is applied. The "first delta" stands for normalization to an internal control, the aforementioned reference gene expression. The "second delta" is the expression change compared to a non-treated control. What we get is the "delta-delta CT" level, showing the cycle threshold differences after normalizing to an internal control and to a non-treated control. To calculate the expression difference, we have to assume a doubling of the amplicon during each PCR cycle, so we can set 2 as the base of the equation: ratio = 2^(-delta-delta CT).

This method was developed and published by Livak and Schmittgen (2001) and is the state-of-the-art relative quantification method.
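The arithmetic described above can be sketched in a few lines (our own illustration, not code from the original publication; function and variable names are ours):

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Livak & Schmittgen 2^(-ddCT) fold change, assuming both assays
    amplify with perfect doubling (efficiency = 2) in each cycle."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # first delta: normalize to reference
    d_ct_control = ct_target_control - ct_ref_control   # same delta in the non-treated control
    dd_ct = d_ct_treated - d_ct_control                 # second delta: treated vs. control
    return 2.0 ** (-dd_ct)

# The target comes up 2 cycles earlier after treatment while the
# reference is unchanged, giving a 4-fold up-regulation.
print(fold_change_ddct(23.0, 18.0, 25.0, 18.0))  # → 4.0
```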

Efficiency-corrected Pfaffl Method

In recent years we have seen that amplification efficiency is not always constant and is not always 2. Therefore, a correction for efficiency differences between the target and reference gene, and between multiple gene-to-gene comparisons, should be applied (Pfaffl, 2001). The so-called efficiency-corrected relative quantification model [equation below] revolutionized mRNA quantification and is increasingly used in academic research and diagnostics.

R = (E_target)^deltaCP_target(control - sample) /
    (E_ref)^deltaCP_ref(control - sample)
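A minimal sketch of this efficiency-corrected ratio in code (our own illustration; E values are per-cycle amplification factors, and each deltaCP is CP(control) minus CP(sample) for that assay):

```python
def pfaffl_ratio(e_target, e_ref, dcp_target, dcp_ref):
    """Efficiency-corrected relative expression ratio (Pfaffl, 2001).

    e_target, e_ref: amplification factors per cycle (2.0 = 100% efficient).
    dcp_target, dcp_ref: CP(control) - CP(sample) for each assay.
    """
    return (e_target ** dcp_target) / (e_ref ** dcp_ref)

# A target assay amplifying 1.9-fold per cycle, shifted by 3 cycles,
# against a perfectly doubling reference shifted by 0 cycles:
print(round(pfaffl_ratio(1.9, 2.0, 3.0, 0.0), 3))  # → 6.859
```

Note that with e_target = e_ref = 2.0 this reduces to the 2^(-delta-delta CT) result.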

— Michael Pfaffl

I have always used the standard curve method of quantification. I like this method for the following reasons:

1) Standard curves allow you a way to monitor how well the assay has run every time in every plate by comparing the slope, y-intercept, and r2 values. If these are not consistent from plate to plate, the results from the unknowns are questionable. If there is a problem, what has to be determined is whether there was a problem with making the standard curve itself or if a bad standard curve is reflecting a problem with the assay. Since we have the luxury of robotics to set up our plates, we see few problems with the standard curves themselves.

2) You can derive the number of molecules for each unknown sample in the assay by interpolation from the standard curve. Therefore, each sample has a numerical value independent of any other sample.

3) Having a distinct value for each unknown allows you to apply any number of statistical analyses, which is not so simple with only a fold-difference, and allows you to compare any sample or group to any other sample or group readily. This is particularly useful when looking at data from multiple plates over a long time span.
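The interpolation step Shipley describes can be sketched as follows, assuming CT is linear in log10 of input copy number (a schematic with made-up standard-curve values):

```python
import math

# Standard curve: known copy numbers and their measured CT values (illustrative)
standards = [(1e6, 15.1), (1e5, 18.4), (1e4, 21.8), (1e3, 25.1), (1e2, 28.5)]

# Least-squares fit of CT = slope * log10(copies) + intercept
xs = [math.log10(c) for c, _ in standards]
ys = [ct for _, ct in standards]
n = len(xs)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean

def copies_from_ct(ct):
    """Interpolate the absolute copy number for an unknown's CT value."""
    return 10 ** ((ct - intercept) / slope)

print(round(slope, 2))              # → -3.35, close to the ideal -3.32 (~99% efficient)
print(round(copies_from_ct(20.0)))  # interpolated copy number for an unknown at CT 20
```

Each unknown thus gets its own numerical value, independent of any other sample, which is what makes downstream statistics straightforward.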

What some folks don't realize is that the CT values derived from a real-time qPCR experiment, in isolation, are really meaningless as values. The most used alternative method to convert CT values into a fold-difference is the ddCT method. This method is only valid if the PCR efficiencies of the assays used are very similar. Most folks performing this method have no idea what the PCR efficiency of their assays is, or what their lowest limit of detection is. These are two critical pieces of information for publication. What I fear is that folks are reporting ddCT values in the literature that are not valid for their assays.

— Gregory Shipley

Q2: How do you predict assay quality prior to experimental evaluation?

In the courses we teach here, many people are rather impatient to go straight into qPCR and get started with the instrument. Before doing so, I recommend using software for assay design, such as Primer3, Primer Express from ABI, or some of the tools that Qiagen or Roche offer through their websites.

It also helps to BLAST primers, but I think that regardless of what the theory shows or predicts, researchers should run the RT-PCR and look at it on a gel before they start burning through their samples.

The other place that I send people to check primers is Jo Vandesompele's RTPrimerDB, which I have found to be a very useful and very thoughtfully prepared database. But even so, I do say that primers are nowadays no longer prohibitively expensive. Order them, check them, and run them. I think that predictions are fine — they can sort out a lot of noise and you can certainly disregard some primer pairs — but before you see it performing in vitro in your assay, you can't tell.

There are other considerations as well: the quality and preparation of the total RNA, priming, and consistency throughout the whole assay. We advocate using the SPUD assay (Nolan et al., 2006) to check for the presence of inhibitors in total RNA preparations or isolates.

— Vladimir Beneš

We do a number of controls to assess the quality of our assays, some of them during assay development and some to test the finalized reaction. During assay development, our main concern is to eliminate mispriming and dimerization of oligonucleotides. This can be a real problem in multiplex reactions, so our laboratory uses Primesafe to prevent this kind of non-specific interaction. We titrate its concentration to obtain optimal amplification of every target sequence in the assay and analyze the PCR products on agarose gels.

In LATE-PCR, amplification is exponential in the initial cycles and only switches to linear amplification once the limiting primer is depleted, just after the CT value is reached. Thus each double-stranded amplicon (specific or not) is sufficiently abundant to visualize on a gel. At the end of LATE-PCR, however, the number of single-stranded molecules is typically 10- to 20-fold higher than the corresponding double-stranded amplicons, so the single-stranded amplicons can be directly sequenced after a simple dilution step (Dilute-'N-Go sequencing; Sanchez et al., in preparation; Salk et al., 2006). Sequencing is the ultimate proof of product identity and purity and can also be used to detect the presence of mutations or polymorphisms. We have shown that up to six amplicons from a single multiplex LATE-PCR can be sequenced by direct dilution (Rice et al., in preparation).

Other parameters used to check the quality of our assays include the slope and CT value of the curves in real-time reactions across a wide range of template copy numbers. Optimized assays should generate parallel curves, at the appropriate CT intervals, for all template concentrations tested. Because the "visible" portion of a real-time LATE-PCR assay is linear, any drop in efficiency is readily observed as a decrease in slope.

— Cristina Hartshorn

Primers are validated as far as possible in silico using multiple primer design and evaluation software packages. Secondary structures of the primer binding sites and possible complementarities between primers are studied using, for example, mFold and NetPrimer. Specificity is tested using BLAST. The designed assay is then tested on a model template, which is typically a cloned target sequence or a representative cell line. Assay efficiency is determined, as well as any complications arising from primer-dimers.

— Mikael Kubista

Before quantification, essential assay checks have to be performed. The amplification history and melting curve of the assay will tell us a lot about assay performance. Melting curves should have only one major peak, specific for the generated RT-PCR product, at least in intercalating dye assays. If a primer-dimer peak appears, primer optimization should be performed until the peak disappears.

Amplification history curves should be stable (not variable and noisy), should have a steep increase (marker for good PCR efficiency), and should end in a stable and high plateau (standing for a high amount of the generated product and good polymerase performance).

Furthermore, the negative control should have no amplification, or at least a very late amplification (>CT 45) in SYBR Green assays.

Reproducibility within one run (intra-assay variation) and between day-to-day repeated measurements (inter-assay variation) is very important. Here, the variation should not exceed 10-15 percent on a molecular basis, or, speaking in CT levels, not more than 0.1 or 0.2 CTs.
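The correspondence between these two ways of stating variability follows from the exponential nature of PCR: assuming perfect doubling, a CT spread of x cycles corresponds to a (2^x - 1) × 100 percent spread on the molecular level (a back-of-envelope sketch; the function name is ours):

```python
def ct_spread_to_percent(delta_ct, efficiency=2.0):
    """Percent variation in computed quantity for a given CT spread,
    assuming the stated per-cycle amplification factor."""
    return (efficiency ** delta_ct - 1.0) * 100.0

# With doubling, 0.1-0.2 CTs corresponds to roughly 7-15 percent,
# consistent with the thresholds quoted above.
print(round(ct_spread_to_percent(0.1), 1))  # → 7.2
print(round(ct_spread_to_percent(0.2), 1))  # → 14.9
```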

— Michael Pfaffl

This would only be an issue for folks that do not run standard curves with their assays. There is nothing wrong with using the delta-delta CT method I mentioned in Q1. For some experimental data it makes a lot of sense. However, it is critical that you know the lowest limit of your assay and the PCR efficiency prior to applying this method of sample quantification. This is true regardless of whether the assay was designed by the user or comes from a commercial source.

The easiest way to get this information is to run a standard curve. If you have a commercial assay, take the first PCR products you make in a real-time experiment, pool those having the most signal, and run them through a PCR clean-up kit. Then, get a rough idea of concentration from an A260 reading and make a 10-fold template dilution series over 6-7 logs, starting with roughly 1 pg of template at the highest point. Then run the assay again, and put in values for the standards. The values do not have to be calculated, although that isn't hard to do. Finally, calculate the PCR efficiency from the slope: efficiency (%) = (10^(-1/slope) - 1) × 100.
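That calculation can be checked in a couple of lines (a sketch; the slope is taken from a CT-versus-log10(dilution) standard curve as described above):

```python
def efficiency_percent(slope):
    """PCR efficiency (%) from the slope of a CT-vs-log10(dilution)
    standard curve: (10^(-1/slope) - 1) * 100."""
    return (10 ** (-1.0 / slope) - 1.0) * 100.0

# The theoretical slope for perfect doubling is about -3.32:
print(round(efficiency_percent(-3.32), 1))  # → 100.1
print(round(efficiency_percent(-3.5), 1))   # → 93.1
```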

Even more important is [asking], "What is the lowest dilution of standard that is still on the linear line?" You will want to analyze this run the same as you will analyze all the subsequent runs in terms of threshold and baseline settings. Then you will have the lowest CT value that is valid for the assay in question. Actually, you can add one more cycle to that value. Empirically, another two-fold dilution always works for any assay. Once you know that CT value, you will not need the standard curve again. You will also know whether the assay you plan to use to normalize your data has a similar slope to your assay(s) of interest.

— Gregory Shipley

First, I look at the melting curves produced for the PCR products. A melting curve showing only a single, sharp peak suggests that the amplification is specific. Second, I look at the amplification curves, which should move upward smoothly in the log phase, and should not have secondary peaks in the plateau phase. Third, I look at the standard curve. If all the standards are on one straight line, which has a slope close to -3.33, the quality and efficiency of the amplification reaction are good. Finally, I check the difference in CT values between duplicate samples, which should be smaller than one-half cycle for the assay to be reliable.

— Xiuling Zhang

Q3: How do you measure and control for efficiency?

I tell everyone I know that the relative expression software tool (REST; Pfaffl et al., 2002) can deal with reduced efficiencies, but the only way to determine the efficiency of your assay is to run a standard curve. Before running one, you can't tell anything. Any comparisons based upon assays whose efficiency has not been determined are, in my eyes, not fully valid. Software or an algorithm can estimate efficiency, and may even be correct, but that is not sufficient. If you determine efficiency from your standard curves, you know you can compare your assays, and you don't need any algorithms to adjust for shortcomings of the assays or limitations on a technical level.

I think that the qPCR component is really very robust and reproducible, and there is no need to run many technical replicates on this quantitative bit. People should concentrate on running reverse transcription replicates and then compare those.

— Vladimir Beneš

The slope of real-time LATE-PCR curves gives me the first indication of the efficiency of the reaction, even when testing unknown samples where the CT values (determined by the RNA content) cannot be predicted. So, when assaying sets of single cells, samples whose slopes fall clearly outside the group can be eliminated on the basis of inefficient amplification. After having used symmetric PCR for years, I was glad to realize that this feature of LATE-PCR is an extremely sensitive indicator of changes in efficiency caused by a number of factors (e.g. mispriming or the addition of different reagents to the PCR mix).

In addition, I utilize biological samples (embryos and embryonic blastomeres) as controls. Taking advantage of the fact that my primers land within exons, I can run "no RT" controls that allow me to detect my genomic DNA sequence(s) in the gene(s) of interest but not in cDNA. I do these tests on single cells, hence my assay has to be sensitive enough to amplify one or two copies of each target, depending on whether the target gene is on a sex chromosome or on an autosome.

Also, I recently developed a duplex assay for a gene expressed only in female embryos and a gene expressed in both sexes (Hartshorn et al., in preparation). This assay required several optimization steps, until quantification of the mRNA present in all embryos was completely unaffected by the presence or absence of the second, sex-dependent mRNA species, as shown both by average measurements in male and female samples and by the constant slope of the real-time curves. Controls of this kind can be devised according to the characteristics and requirements of different experimental systems.

— Cristina Hartshorn

We usually talk about PCR efficiency without defining what we mean by efficiency. In essence, there is assay efficiency, which is the PCR efficiency measured on a purified template in the absence of any inhibitors. The template is usually rather uncomplicated. For a well-designed assay the efficiency should be 0.9 or higher. But this efficiency can be difficult to reach in biological samples with a complex matrix. Also, the biological template may be more complicated, for example, being heavily supercoiled, which reduces priming efficiency. The PCR efficiency in the biological sample can be estimated by, for example, in situ calibration (Ståhlberg et al., 2003). But this requires performing a dilution series on each sample, which is costly and time-consuming.

A practical approach is to perform this detailed analysis on some representative samples and, if the variation among them is not substantial, determine an average efficiency that is assumed to be representative for the particular samples.

Another possibility is to inspect the fluorescence response curve and identify anomalous samples by kinetic outlier detection (Bar et al., 2003). An Internet-based software solution for this kind of quality assurance will soon be available through LabonNet.

— Mikael Kubista

This is a very sensitive topic, and we could discuss it for hours. Most widely applied is, of course, the calibration- or dilution-curve method. From the slope of the curve the efficiency can be calculated. In my eyes it is a very robust method, but it is often too optimistic, overestimating the real PCR efficiency. We often end up with efficiencies higher than 2.0 — up to 2.2 or higher.

How can this be? What is wrong with the method? I do not know yet, but as we know from nature, no biological reaction is ever 100 percent efficient, let alone more. Therefore, my workgroup looked deeper into the problem and came up with single-run efficiency-estimating models on the LightCycler.

The problem lies in various factors: what happens in the PCR tube, and how can we measure it correctly? The reporter dye, the tube itself, the optical unit, and the cycler's fluorescence measurement all influence the algorithms. Each cycler platform has its own characteristic fluorescence history and amplification efficiency performance. In the near future we will have to adapt individual algorithms for each cycler platform.

More details and available algorithms can be found at

— Michael Pfaffl

For measuring amplification efficiency, I usually make serial dilutions (e.g., 1:10, 1:100, and 1:1000) of an RT sample, a cloned cDNA, or a purified RT-PCR product, and use the series as arbitrary standards. A standard curve is generated using this set of samples by plotting CT values versus abundance of the template in arbitrary units.

To improve efficiency, I usually optimize the reactions by sequentially varying the following parameters: Mg++ concentration, annealing temperature and time, and concentration of other reagents (primers, dNTPs, and polymerase). Increasing Mg++ concentration may increase amplification efficiency, but at a risk of losing amplification specificity. Too high annealing temperature or too short annealing time will result in lower efficiency. If the efficiency is still low after all the above parameters have been optimized, I would then try new primers.

— Xiuling Zhang

Q4: Which computational tools are best for data analysis?

The relative expression software tool (REST; Pfaffl et al., 2002), I believe, is very good and very robust. For ranking reference transcripts we use geNorm, and in most cases we use ddCT, which is more or less the standard setup.

— Vladimir Beneš

We use GenEx from MultiD Analyses. It's Windows-based and has a user-friendly, spreadsheet-based preprocessing module that starts with CT data. It is very easy to test the effect of, for example, differential inhibition or assay variability. It has both geNorm and NormFinder, which is nice, because the two approaches to identifying optimum reference genes are complementary and suited to different situations. The professional version of GenEx also has very powerful methods for expression profiling and sample classification, including principal component analysis, hierarchical clustering, self-organizing maps, and much more. These methods are very useful for the kind of profiling studies we mainly do today.

— Mikael Kubista

Today the relative gene expression approach is increasingly used in gene expression studies, where the expression of a target gene is standardized to a non-regulated reference gene or to an index comprising several reference genes (at least three). Several mathematical algorithms have been developed to compute the expression ratio, based on real-time PCR efficiency and the crossing point (CT or CP) deviation (delta CP) of an unknown sample versus a control. But all published equations and available models for the calculation of the relative expression ratio allow only for the determination of a single transcription difference between one control and one sample.

After developing the efficiency correction algorithm, we set up the Relative Expression software tool (Pfaffl et al., 2002).

New software tools have been established that compare two or more treatment groups or conditions (REST-MCS), handle up to 100 data points per sample or control group (REST-XL), and support multiple reference genes and up to 15 target genes (REST-384). The mathematical model used is based on correction for exact PCR efficiencies and the mean crossing point deviation between sample and control group(s). Subsequently, the expression ratio results of the investigated transcripts are tested for significance by a Pair Wise Fixed Reallocation Randomization Test and plotted with standard error estimation via a complex Taylor algorithm.

Several updates covering a range of applications are available; all REST software applications are freely available online.

— Michael Pfaffl

There are two levels of post-run data analysis. The first is to establish the baseline and threshold settings for the assay and run. Most folks these days let the instrument do that, which is not a good idea, for two reasons.

First, in an as-yet-unpublished study by the Nucleic Acids Research Group within the ABRF [Association of Biomolecular Resource Facilities], we compared the results from the same standard curve, reagents, chemistry, and person, analyzed by the software that comes with each of the nine different real-time instruments used for the comparison. We found that none of the software packages gave an optimal analysis on its own, regardless of the settings used. Only when we analyzed the data manually did we get the best standard curves and the most comparable data from instrument to instrument.

This brings up another point: you should determine during assay QC what the baseline and threshold settings for an assay will be, and then stick to those settings throughout the use of that assay. The baseline can be moved slightly to accommodate differences in sample or standard amounts, but the threshold should stay constant. If you let the software determine the threshold and baseline, they will change slightly from run to run and your data will not be as comparable, especially if you are using the ddCT method.

The second software group includes those used to analyze the data once you have done a good job with the initial post-run analysis described above. For that I like GenEx from MultiD, and before that geNorm and BestFit. I'm sure everyone uses Excel in one way or another, and I like Prism for statistics and making graphs.

— Gregory Shipley

Q5: What is the minimal assay information you should report?

I think there should be the sequences of the primers, and there should be the efficiencies of the primers or of the assay. Information about a primer is not only its sequence but also the target sequence of the amplicon. It should then be made clear, say by accession number, which sequence was used to design the assay, not only the primers.

I think that if you provide efficiency information, the article is much more telling, because this information is more complete than providing just primer pairs. It would then perhaps be easier to take these primers and use them right away. However, I do advise caution: validate primers before using them at face value. But if I see that efficiency is at 99 percent, I think it's a good value.

In short, I think that the minimal assay information should include: sequence, accession number of the target sequence on which the primer or assay was designed, and efficiency of the assay.

— Vladimir Beneš

I think that all details, including the thermal profile used, should be reported in scientific papers, although commercially available products can be cited.

The thermal profile is not always included in papers but it is fundamental to reproduce results. If the assays are part of a commercial kit, it is not necessary to reveal the content but I personally prefer to purchase kits (for instance for RT) that include the buffer composition, etc. This knowledge is very helpful if one needs to introduce modification to the suggested protocols, which is almost always the case in a research lab.

— Cristina Hartshorn

This question is not as simple as it may sound, because all the information that one would like to have about an assay is not always available, and we cannot expect companies to make it available. But, in essence, enough information should be provided that makes it possible to reproduce the experiment.

This includes sampling details, such as how the sample was taken, in what medium it was collected, and after what time it was placed there; other storage information, including any changes in temperature during transportation; a detailed extraction protocol; and a detailed protocol for reverse transcription. The latter should include the enzyme and priming strategy used, since these have a profound effect on the yield (Ståhlberg et al., 2004). The qPCR primer and probe sequences should be presented, if available, or the catalogue number for commercial assays, along with experimental conditions including primer, probe/dye, dNTP, and Mg2+ concentrations and, for homemade assays, buffer conditions whenever possible.

For SYBR and BOXTO assays, the concentration of the stock solution should be provided. If spikes were used, their sequences should be provided, with information on how and when the samples were spiked. Assay efficiency should be given, as well as the typical efficiency for the sample matrix. If data are normalized with reference genes, the basis for choosing those reference genes should be provided: either the authors should have validated the reference genes themselves, or a reference should be given to a study that validated them for the particular samples. Convenient panels are available today for validation (see, for example, TATAA's gene panels), and the time has passed when studies could give shaky data because improper reference genes were used.

— Mikael Kubista

If you made the assay yourself or had it made commercially, you should know everything about that assay. In that case you should present:
1) the NCBI name of the transcript (or gene) and any synonyms that may be more commonly recognized
2) the accession number of the sequence used for assay design or a reference to the sequence used if it isn't in the NCBI database
3) the sequence of the primers and/or probe used in the assay, including the position of each oligo's 5' base within the target sequence
4) the length of the amplicon
5) the PCR efficiency from your empirical data
6) the lowest limit of detection for the assay from your empirical data

If you purchased a commercial assay, you should present:
1) the NCBI name of the transcript (or gene) and any synonyms that may be more commonly recognized (from the datasheet)
2) the accession number of the sequence used for assay design or a reference to the sequence used if it isn't in the NCBI database (from the datasheet)
3) the catalogue number of the purchased assay
4) the PCR efficiency from your empirical data
5) the lowest limit of detection for the assay from your empirical data

All of this information can easily be presented in a table. It will also put editors at ease, so they can concentrate on your data and not on the assays from which the data were derived.
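The empirical PCR efficiency mentioned above is conventionally derived from the slope of a standard curve (Cq versus log10 of template input) using E = 10^(-1/slope) - 1, where a slope of about -3.32 corresponds to 100% efficiency. A minimal sketch of that calculation, using hypothetical dilution-series values:

```python
def pcr_efficiency(log10_inputs, cq_values):
    """Estimate PCR efficiency from a dilution-series standard curve.

    Fits Cq = slope * log10(input) + intercept by least squares,
    then converts the slope to efficiency: E = 10**(-1/slope) - 1.
    An ideal assay (perfect doubling each cycle) gives E = 1.0 (100%).
    """
    n = len(log10_inputs)
    mean_x = sum(log10_inputs) / n
    mean_y = sum(cq_values) / n
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(log10_inputs, cq_values))
    sxx = sum((x - mean_x) ** 2 for x in log10_inputs)
    slope = sxy / sxx
    return 10 ** (-1 / slope) - 1

# Hypothetical ten-fold dilution series: Cq rises 3.32 per dilution step
dilutions = [5, 4, 3, 2, 1]                # log10 of template copies
cqs = [15.0, 18.32, 21.64, 24.96, 28.28]   # observed Cq values
print(round(pcr_efficiency(dilutions, cqs), 2))  # → 1.0 (i.e. ~100%)
```

Efficiencies well below 1.0 (or apparent efficiencies above it) flag inhibition or assay-design problems that readers need to know about when interpreting the data.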

— Gregory Shipley

1) RNA isolation method
2) Reverse transcription method
3) Genomic DNA removal method and controls used
4) Primer sequence
5) Components of amplification reaction and temperature program
6) Quantification standards information
7) Real-time PCR system and kit used
8) Calculation method

— Xiuling Zhang
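As a concrete illustration of the "calculation method" item above, a minimal sketch of the widely used comparative Ct (2^-ΔΔCt) approach, assuming roughly 100% efficiency for both assays; all Cq values here are hypothetical:

```python
def ddct_fold_change(target_test, ref_test, target_calib, ref_calib):
    """Comparative Ct (2**-ddCt) relative quantification.

    Normalizes the target Cq to a reference gene within each sample,
    then expresses the test sample relative to the calibrator sample.
    Assumes both assays amplify with ~100% efficiency.
    """
    d_test = target_test - ref_test      # dCt in the test sample
    d_calib = target_calib - ref_calib   # dCt in the calibrator sample
    return 2 ** -(d_test - d_calib)      # fold change vs. calibrator

# Hypothetical Cq values: target gains 2 cycles on the reference gene
print(ddct_fold_change(22.0, 18.0, 24.0, 18.0))  # → 4.0 (four-fold up)
```

Whichever calculation method is used, reporting it explicitly (including how efficiency was handled) is what lets readers reproduce the final expression values from the raw Cq data.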

List of Resources

Our RT-PCR experts referred to a number of publications and Web tools, which we've compiled in the following list.


A-Z of Quantitative PCR
Stephen A. Bustin, ed.
International University Line (July 2004)

PCR (The Basics)
McPherson MJ, Moller SG.
Taylor & Francis; 2nd edition (March 30, 2006)

PRINS And in Situ PCR Protocols (Methods in Molecular Biology)
Franck Pellestor, ed.
Humana Press; 2nd edition (April 21, 2006)

PCR Troubleshooting: The Essential Guide
Michael Altshuler
Caister Academic Press (June 30, 2006)


Hartshorn C, Rice JE, and Wangh LJ. (2004) Optimized real-time RT-PCR for quantitative measurements of DNA and RNA in single embryos and blastomeres. In: Bustin SA, ed. A-Z of Quantitative PCR. International University Line: La Jolla; pp.675-702.

Kubista M, Andrade JM, Bengtsson M, Forootan A, Jonák J, Lind K, Sindelka R, Sjöback R, Sjögreen B, Strömbom L, Ståhlberg A, Zoric N. (2006) The real-time polymerase chain reaction. Mol Aspects Med. Apr-Jun;27(2-3):95-125.

Nolan T, Hands RE, Ogunkolade W, Bustin SA. (2006) SPUD: a quantitative PCR assay for the detection of inhibitors in nucleic acid preparations. Anal Biochem. Apr 15; 351(2):308-10.

Pattyn F, Robbrecht P, De Paepe A, Speleman F, and Vandesompele J. (2006) RTPrimerDB: the real-time PCR primer and probe database, major update 2006. Nucleic Acids Res, 34(Database issue): D684-8.

Pfaffl MW, Horgan GW, Dempfle L. (2002) Relative expression software tool (REST) for group-wise comparison and statistical analysis of relative expression results in real-time PCR. Nucleic Acids Res. May 1;30(9):e36

Pierce KE, Sanchez JA, Rice JE, and Wangh LJ. (2005) Linear-After-The-Exponential (LATE)-PCR: primer design criteria for high yields of specific single-stranded DNA and improved real-time detection. Proc Natl Acad Sci USA, 102: 8609-8614.

Salk JJ, Sanchez JA, Pierce KE, Rice JE, Soares KC, Wangh LJ. (2006) Direct amplification of single-stranded DNA for pyrosequencing using linear-after-the-exponential (LATE)-PCR. Anal Biochem. 353:124-132.

Sanchez JA, Pierce KE, Rice JE, and Wangh LJ. (2004) Linear-after-the-exponential (LATE)-PCR: an advanced method of asymmetric PCR and its uses in quantitative real-time analysis. Proc Natl Acad Sci USA, 101: 1933-1938.

Ståhlberg A, Åman P, Ridell B, Mostad P, and Kubista M. (2003) Quantitative Real-Time PCR Method for Detection of B-Lymphocyte Monoclonality by Comparison of {kappa} and {lambda} Immunoglobulin Light Chain Expression. Clin Chem, 49: 51-59.

Ståhlberg A, Håkansson J, Xian X, Semb H, and Kubista M. (2004) Properties of the Reverse Transcription Reaction in mRNA Quantification. Clin Chem, 50: 509-515.


Endogenous Control Gene Panels (TATAA Biocenter)

Gene Quantification web site (edited by Michael Pfaffl)

GenEx software


LabonNet, Ltd.

REST applications



Many thanks again to Xinxin Ding of the Wadsworth Center for advising on the answers submitted by Xiuling Zhang.

Index of Names

Locate expert advice across all three volumes of the qPCR reference guide series using the index below.

Adams, Scottie, I: 5, 7, 11, 13, 15
Andersen, Claus Lindbjerg, I: 5, 7, 11
Beneš, Vladimir, I: 5, 7, 9, 11, 13, 15, 16; III: 5, 7, 10, 13, 17, 18
Bustin, Stephen, I: 5, 9, 11, 15, 16, 17
Ding, Xinxin, II: 19; III: 22
Hartshorn, Cristina, II: 5, 7, 12, 15, 17; III: 5, 7, 9, 10, 13, 18
Huggett, Jim, II: 5, 7, 11, 12, 15, 17
Hunter, Tim, II: 5, 7-8, 11, 12, 15, 17
Kubista, Mikael, I: 5, 9, 11, 15-17; II: 5, 8, 11, 13, 15, 17, 21-22; III: 5, 9, 11, 13-14, 17, 18, 20
Levy, Shawn, I: 5, 9, 15, 16, 17
Pfaffl, Michael, III: 5, 9, 11, 14, 17, 19
Shipley, Gregory, III: 5, 7, 11, 17, 19, 20
Vandesompele, Jo, I: 5, 9, 15, 17; II: 5, 11, 13, 21, 22; III: 10
Wong, Marisa, II: 5, 13, 21, 22
Zhang, Xiuling, II: 5, 13, 21; III: 5, 14, 19, 20
Zianni, Michael, I: 5, 9, 15, 17