NEW YORK (GenomeWeb) – An international team of academic and industry researchers this week published the results of a comparison of commercially available platforms for microRNA expression analysis, finding that each has its own strengths and shortcomings that should be considered before selecting one for a particular study.
As more and more data link miRNAs to nearly every aspect of cell biology, there is growing interest in using their expression patterns — particularly in biofluids — as biomarkers of health and disease. Yet accurately and reliably analyzing miRNA signatures has remained a challenge.
A number of different technologies exist for studying miRNA expression, but "quantifiable performance metrics for these platforms are often ill-defined or simply non-existing, which hampers informed selection of the most appropriate method," the report's authors wrote in Nature Methods.
To address this, the scientists undertook a miRNA quality control study that involved all major vendors of miRNA profiling platforms based on hybridization, sequencing, and RT-qPCR. All of the vendors agreed to the study design, sample selection, and data-analysis methods.
In total, the study focused on 12 platforms from nine different vendors, with each platform profiling an identical set of 20 standardized positive and negative control samples including human universal reference RNA, human brain RNA and titrations thereof, human serum samples, and synthetic spikes from miRNA family members with varying homology.
The PCR-based platforms studied were Exiqon's miRCURY system; Life Technologies' OpenArray, TaqMan Cards, and TaqMan Cards with preamplification; Qiagen's miScript; Quanta Biosciences' qScript; and WaferGen's SmartChip. Hybridization-based platforms in the study included Affymetrix's microarray, Agilent's microarray, and NanoString's nCounter. On the sequencing side, Illumina's TruSeq and Life Technologies' Ion Torrent were also tested.
The scientists developed quality metrics to objectively assess the performance of each platform in terms of reproducibility, sensitivity, accuracy, specificity, and concordance of differential expression.
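The article does not give the formulas behind these metrics. As an illustration only, the sketch below shows one plausible way such per-platform scores might be computed; the function names, inputs, and thresholds are all invented for this example and are not taken from the study.

```python
# Hypothetical sketch of platform quality metrics; all data and
# function names here are invented, not the study's actual methods.
import statistics

def reproducibility(rep1, rep2):
    """Pearson correlation between expression values from two
    technical replicates of the same sample on one platform."""
    m1, m2 = statistics.mean(rep1), statistics.mean(rep2)
    cov = sum((a - m1) * (b - m2) for a, b in zip(rep1, rep2))
    var1 = sum((a - m1) ** 2 for a in rep1)
    var2 = sum((b - m2) ** 2 for b in rep2)
    return cov / (var1 * var2) ** 0.5

def sensitivity(n_detected, n_truly_expressed):
    """Fraction of miRNAs known to be present (e.g. spike-ins)
    that the platform actually detects."""
    return n_detected / n_truly_expressed

def specificity(n_correct_negatives, n_negative_controls):
    """Fraction of negative-control assays correctly reported
    as undetected."""
    return n_correct_negatives / n_negative_controls
```

Under definitions like these, the study's reported trade-off would appear directly: a platform that calls faint signals present scores higher on sensitivity but lower on specificity, and vice versa.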
Among the findings of the study were substantial interplatform differences when evaluating differential miRNA expression, according to the paper.
"As different technologies are often applied for validation purposes, the choice of platform could dramatically impact the validation rate," the team wrote. "With an average validation rate for differentially expressed miRNAs of only 54.6 percent between any two platform combinations, we strongly advise that screening studies are followed by targeted validation using an alternative platform or technology."
Further, they found that platforms based on the same technology can perform very differently, most obviously in reproducibility and specificity among the qPCR platforms. "In contrast, sensitivity is very much technology-related with qPCR platforms having an overall better score, especially when it comes down to low input RNA samples." This sensitivity, they added, is accompanied by high accuracy, resulting in reliable quantitative measurements.
There were also some surprising results, the investigators wrote. These included low specificity for several platforms; low concordance of differential expression; and poor titration response and lack of reproducibility for several qPCR platforms, implying that those platforms should not be used to quantify small expression changes in small sample cohorts. The team further observed that some performance parameters are technology-related whereas others are platform-related, and that there is a strong and significant inverse correlation between sensitivity and specificity.
In the end, each platform has certain strengths and weaknesses, suggesting that a platform should be chosen based on the experimental setting and the specific research question, the team concluded.