
NIA Study Assesses Reproducibility, Sensitivity of SomaLogic's SomaScan


NEW YORK — A team of researchers at the National Institute on Aging (NIA) has completed an assessment of the performance of SomaLogic's current SomaScan proteomics assay.

Detailed in a paper published last month in Nature Scientific Reports, the team's assessment found that the SomaScan assay, which measures 7,288 human proteins, is highly reproducible and sensitive.

The study did not, however, address questions around the platform's specificity and how well its measurements correlate with orthogonal measurement approaches, both of which are longstanding concerns, said Maik Pietzner, a bioinformatician at the MRC Epidemiology Unit at the University of Cambridge School of Clinical Medicine and a SomaScan user.

Pietzner, who was not involved in the NIA study, said that to his mind the paper's most significant contribution was its analysis of the SomaScan data normalization process, which he said could help researchers better customize the company's recommended pipeline to the needs of their particular samples and datasets.

The SomaScan platform uses SomaLogic's Somamer reagents, a modified form of aptamers, nucleic acid-based reagents capable of binding proteins or other targets. Once bound to their protein targets, the reagents can be read out via technologies like PCR, microarrays, or next-generation sequencing.

In the Nature Scientific Reports study, the researchers looked at SomaScan data from 2,050 samples across 22 different sample plates, including inter-plate technical duplicates from 102 individuals. Using a grid-search approach to assess the variability of the assay, they found it to be highly reproducible, with the large majority of the 7,288 protein measurements having coefficients of variation below 10 percent once the data had been fully normalized.
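
To give a sense of the metric involved, the coefficient of variation for a protein measured in technical duplicates is the standard deviation of the replicate signals divided by their mean. The sketch below is a generic illustration of that calculation, not the NIA team's grid-search procedure; the table names and layout (one row per individual, one column per Somamer target, values in relative fluorescence units) are assumptions for the example.

```python
import numpy as np
import pandas as pd

def per_protein_cv(rep1: pd.DataFrame, rep2: pd.DataFrame) -> pd.Series:
    """Percent CV per protein across paired technical duplicates.

    rep1 and rep2 are hypothetical RFU tables of identical shape:
    one row per individual, one column per Somamer/protein target.
    """
    pair_mean = (rep1 + rep2) / 2.0
    pair_sd = np.abs(rep1 - rep2) / np.sqrt(2.0)   # sample SD of a pair of values
    # per-pair CV = SD / mean, summarized per protein as the median across individuals
    return (pair_sd / pair_mean).median(axis=0) * 100

# Example: fraction of proteins with median CV below 10 percent
# cvs = per_protein_cv(plate_a_rfu, plate_b_rfu)
# print((cvs < 10).mean())
```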

The NIA researchers also found the platform to be highly sensitive, observing that in the experimental samples they looked at, only seven protein targets fell below the assay's limits of detection. They noted, however, that when they looked at controls consisting of non-cleavable, non-biotin, and spurious Somamer reagents that are not expected to bind to protein targets, they in some cases observed signal well above that detected in samples containing only buffer. This, they wrote, indicates measurements at the platform's lower limits of detection might be "unreliable due to background noise," which they suggested could reduce the platform's advertised 10 orders of magnitude of dynamic range by one to two orders of magnitude.
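
The arithmetic behind that argument can be illustrated with a simple blank-based limit-of-detection calculation. The sketch below is a generic example, not the paper's method: it assumes buffer-only well signals and non-binding control signals are available as arrays, defines the LOD as the buffer mean plus three standard deviations, and estimates how many orders of magnitude of the nominal range sit below the background seen in the controls. All numbers are made up for illustration.

```python
import numpy as np

def limit_of_detection(buffer_rfu: np.ndarray, k: float = 3.0) -> float:
    """Classical blank-based LOD: mean buffer-only signal plus k standard deviations."""
    return buffer_rfu.mean() + k * buffer_rfu.std(ddof=1)

# Hypothetical signals, for illustration only
buffer_rfu = np.array([48.0, 52.0, 50.0, 49.0, 51.0])       # buffer-only wells
neg_control_rfu = np.array([400.0, 650.0, 520.0, 480.0])    # non-binding control reagents

lod = limit_of_detection(buffer_rfu)
background = neg_control_rfu.mean()

# If non-binding controls sit roughly 10x above the buffer-derived LOD, about one
# order of magnitude of the nominal dynamic range is effectively lost to background.
orders_lost = np.log10(background / lod)
print(f"LOD from buffer: {lod:.1f} RFU; orders of magnitude lost: {orders_lost:.2f}")
```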

The NIA team declined to be interviewed about the study, noting that agency leadership did not want any potential comments outside the scope of the paper to be seen as a federal government endorsement of SomaLogic.

In addition to looking at the performance of the assay, the researchers analyzed the normalization process SomaLogic recommends users apply to data generated on the platform.

The company's data normalization process aims to correct for sources of analytical variation across experiments, said SomaLogic Chief Medical Officer Stephen Williams.

"If a robot over-dilutes a sample by 10 percent, for example, you'd like to correct for that afterwards," he said.

To allow for this correction, the company uses measurements it has made on a set of samples in an external reference population. In the study, the NIA team presented what they described as an independent normalization pipeline that users can run without having to use SomaLogic's external reference datasets, which, they said, could allow researchers to better tailor the normalization process to their specific sample sets and study objectives.
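
The general idea of an in-study alternative can be illustrated with a simple median-signal normalization, in which each sample is rescaled so that its median signal matches the median across the study, with no external reference needed. The sketch below is a generic example of that idea under an assumed data layout; it is not SomaLogic's pipeline or the NIA team's published code.

```python
import pandas as pd

def median_signal_normalize(rfu: pd.DataFrame) -> pd.DataFrame:
    """Scale each sample so its median signal matches the study-wide median.

    rfu is a hypothetical table of raw signals: one row per sample,
    one column per Somamer/protein target. A sample over-diluted by
    10 percent would have a low median and receive a >1 scale factor.
    """
    sample_medians = rfu.median(axis=1)       # per-sample median signal
    target = sample_medians.median()          # study-wide reference, no external data
    scale_factors = target / sample_medians   # one factor per sample
    return rfu.mul(scale_factors, axis=0)

# normalized = median_signal_normalize(raw_rfu)
```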

Pietzner said that the researchers' exploration of "this very technical normalization procedure" is "really helpful," as it would allow users to "get away from relying entirely on SomaLogic for these normalization steps" and instead to "have almost an in-house bioinformatic pipeline that customizes [the process] to your data."

He said, however, that questions remain about whether and when the normalization process may negatively affect the ability of researchers to detect biological signal in samples or correlate their findings with those from other platforms.

For instance, Pietzner said that he and his colleagues have found that their non-normalized SomaScan data correlates better than their normalized data with measurements collected on the same samples by other platforms, specifically Olink's Explore platform.

On the other hand, they have found in experiments looking for protein quantitative trait loci, or pQTLs — links between genetic variants and plasma protein levels — that data normalization significantly improves the SomaScan platform's detection of cis pQTLs, those located close to the gene encoding the protein in question.

More subtle signals, though, are often lost following normalization, Pietzner said. Whether or not that is because those signals are, in fact, due to analytical as opposed to biological variation "is very hard to answer," he said.

"To be honest, we don't have a good answer," he said. "What we have been doing lately is being very pragmatic, saying, OK, we will run our analysis on both [normalized and non-normalized] datasets and report those [findings] that are consistent."

The Nature Scientific Reports study only briefly addressed questions around the specificity of the SomaScan platform — a concern that has been commonly raised regarding SomaLogic's technology due in large part to the fact that unlike conventional sandwich immunoassays, it uses only one capture agent per protein target. The company has worked to build an additional level of specificity into its reagents by engineering the molecules to exhibit slow dissociation from their targets, on the order of an hour or more, in the case of true hits, and fast dissociation, on the order of seconds, in the case of off-target binding.

Several papers cited by the NIA researchers as addressing the assay's specificity show mixed results. For instance, recent work by a Johns Hopkins University team comparing SomaScan measurements to traditional immunoassays for nine markers of kidney disease found strong correlations for four proteins, weak correlations for two, and no correlation for three.

A study of participants in the Atherosclerosis Risk in Communities (ARIC) Study led by another group of Johns Hopkins researchers found that in the case of nine chronic disease markers, six SomaScan assays correlated well with clinical immunoassays, while three correlated moderately.

Regarding the specificity question, Williams said that the company has been using mass spectrometry to orthogonally validate its assay library and that it was making this data available through a query tool where researchers can look up what validation has been done for each assay target.

"If this was a systematic and deep problem, then among the number of things that have been orthogonally validated we would have found a load of mistakes, and we haven't," he said. "I think it is a bit of a red herring."

Pietzner said, however, that from his perspective, uncertainty around the platform's specificity remained "a massive issue."

"We need to be very sure that we are looking at the right protein," he said.

He noted that while positive findings can be validated further using other methods or by comparing against other datasets or existing literature, the question of associations missed due to specificity issues is a tougher one.

"We are quite aware that these experiments have to be considered exploratory," Pietzner said. "I think it is similar to the first generation of microarrays for transcription where people don't entirely trust them. I think it's early days to say that these 7,000 proteins or the 3,000 Olink measures are all like ground truth measures of a protein."