Cellumen to Use CellCiphr Platform To Profile 50 Compounds for NCTR

Cellumen announced this week that it will use its CellCiphr technology to profile blinded samples of 50 compounds, including failed and marketed drugs, for the National Center for Toxicological Research, a research center of the US Food and Drug Administration.
 
The NCTR will incorporate the resulting data into a liver toxicity knowledge base under development, an NCTR official told CBA News last week. A Cellumen official said that in the meantime, the company will use the profiling and compound-safety data generated through the collaboration to expand the diversity of its CellCiphr database and cell panels, and to develop its classifier informatics tools.
 
The purpose of the project, which could eventually grow to 200 compounds, “is to validate the CellCiphr panels around liver toxicity,” said Donald Taylor, Cellumen’s senior director of marketing and corporate development. He told CBA News that Cellumen will run each of the compounds through its CellCiphr panels, and return to the NCTR a safety risk index for each compound.
 
“The intent is that the safety risk index we produce via CellCiphr corroborates the known in vivo toxicology, so that the NCTR can verify and validate the predictivity of Cellumen’s CellCiphr product,” said Taylor.
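
To illustrate what such a validation could involve, here is a minimal sketch in Python that compares a hypothetical panel-derived safety risk index against known in vivo hepatotoxicity calls and reports sensitivity and specificity. The compound names, scores, and 0.5 cutoff are illustrative assumptions, not Cellumen’s actual CellCiphr method.

    # Hypothetical sketch: checking a safety risk index against known
    # in vivo hepatotoxicity calls. Scores, labels, and the 0.5 cutoff
    # are illustrative placeholders, not Cellumen's CellCiphr algorithm.

    # (compound, predicted risk index in [0, 1], known hepatotoxicant?)
    results = [
        ("compound_A", 0.91, True),
        ("compound_B", 0.12, False),
        ("compound_C", 0.67, True),
        ("compound_D", 0.08, False),
        ("compound_E", 0.35, True),   # a false negative at this cutoff
    ]

    THRESHOLD = 0.5  # risk index at or above this is called "toxic"

    tp = sum(1 for _, s, tox in results if s >= THRESHOLD and tox)
    fp = sum(1 for _, s, tox in results if s >= THRESHOLD and not tox)
    fn = sum(1 for _, s, tox in results if s < THRESHOLD and tox)
    tn = sum(1 for _, s, tox in results if s < THRESHOLD and not tox)

    sensitivity = tp / (tp + fn)  # true hepatotoxicants correctly flagged
    specificity = tn / (tn + fp)  # non-toxic compounds correctly cleared

    print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")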
 
From there, Cellumen expects to run more compounds through the existing CellCiphr panels and, in the future, have the NCTR run more compounds through an expanded set of CellCiphr panels.
 
“That would be a goal for a potential [second phase of the deal], but that is not a part of this initial collaboration,” Taylor said. He added that Cellumen sees this collaboration as the first phase of a longer-term partnership between the company and the NCTR.
 
“You have to start somewhere, and since hepatotoxicity represents such a large percent of toxicity, it just makes sense to start there,” he said. 
 
Financial terms of the deal, funded by the FDA’s Critical Path Initiative, were not disclosed.
 
The project has two goals: to provide a cytotoxicity profile for the 50 chemicals that could be used for the liver toxicity knowledge base, and to provide information on how to choose the dose for subsequent genomics, proteomics, and metabolomics experiments, said Weida Tong, director of the NCTR’s Center for Toxicoinformatics.
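
As a rough illustration of how a cytotoxicity profile can inform dose selection for follow-up -omics experiments, the sketch below fits a Hill-type dose-response curve to simulated viability data and reads off a low-effect concentration (IC10). The data points, the Hill model, and the choice of IC10 are assumptions for illustration, not the NCTR’s stated protocol.

    # Hypothetical sketch: use cytotoxicity data to pick an -omics dose.
    # All numbers are simulated; IC10 as the target dose is an assumption.
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(conc, ic50, slope):
        """Fraction of viable cells remaining at a given concentration."""
        return 1.0 / (1.0 + (conc / ic50) ** slope)

    # Simulated viability measurements across a concentration range (uM)
    conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
    viability = np.array([0.99, 0.97, 0.90, 0.70, 0.45, 0.20, 0.07])

    (ic50, slope), _ = curve_fit(hill, conc, viability, p0=[5.0, 1.0])

    # Solve hill(c) = 0.90 for c: the dose causing ~10% loss of viability
    ic10 = ic50 * (1.0 / 0.90 - 1.0) ** (1.0 / slope)
    print(f"IC50 ~ {ic50:.1f} uM; suggested -omics dose (IC10) ~ {ic10:.1f} uM")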
 


Tong said he “expects” to finish generating data by the end of 2009: Cellumen “need[s] to finish [generating data on] the 50 compounds by the end of this year,” he said, adding that the -omics experiments will take another year to complete.
 
Tong said that to be selected, compounds must have well-characterized hepatotoxicity profiles and must be pure chemicals so that they can be used for analysis.
 
“We want to understand the hepatotoxicity mechanism really well, so we tend to choose compounds with very well-characterized hepatotoxicity mechanisms,” he said. “Although this is very difficult, it is actually one of the primary criteria for compound selection.”
 
This is very reasonable, he said, because the data generated from these compounds will be used to validate in silico results. “So we need to understand the compounds very well before using any data to validate anything,” added Tong.
 
Liver Let Die
 
In part because so many compounds either fail early trials or are removed from the market due to liver toxicity, Tong said that the NCTR plans to develop a liver toxicity knowledge base to integrate all available information from public resources, such as PubMed, into a single repository that regulators can use in the review process.
 
“As we began to gather this information, we began to think it was more on the bioinformatics side, and would require a more in silico type of approach,” said Tong. He added, however, that “we really wanted these results to be validated by experimental data.”
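
For a sense of what that kind of in silico gathering might start from, the sketch below queries PubMed through NCBI’s public E-utilities interface for hepatotoxicity literature on a given compound. The query term and example compound are illustrative assumptions; this is not the NCTR’s actual knowledge-base pipeline.

    # Hypothetical sketch: pull PubMed IDs linking a compound to
    # hepatotoxicity via NCBI E-utilities. The search term is a
    # placeholder, not the NCTR's actual query strategy.
    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_ids(compound, max_results=20):
        """Return PubMed IDs for articles on a compound and hepatotoxicity."""
        params = urlencode({
            "db": "pubmed",
            "term": f"{compound} AND hepatotoxicity",
            "retmax": max_results,
            "retmode": "json",
        })
        with urlopen(f"{ESEARCH}?{params}") as resp:
            data = json.load(resp)
        return data["esearchresult"]["idlist"]

    print(pubmed_ids("acetaminophen"))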
 
This is where the collaboration with Cellumen comes into play. The NCTR is trying to generate genomics, proteomics, and metabolomics data for an initial set of 50 compounds. Those compounds could be drugs on the market or those withdrawn from the market, said Tong.
 
The list will eventually be expanded to 200 compounds in the next phase of the collaboration.
 
The NCTR’s efforts follow similar projects underway at other government agencies. In April 2007, the US Environmental Protection Agency invited four cell-based assay companies, including Cellumen, to participate in its just-launched ToxCast program, which aims to discern how chemicals such as pesticides will interact with the environment, humans, and animals (see CBA News, 4/13/07).
 
The EPA feels that as the program grows, so too will researchers’ confidence in using potential mechanisms of action to help predict toxicity. This new information, in turn, could help to refine and reduce the use of animals in toxicity testing.
 
And this past February, three large US government agencies announced that they plan to establish a collaborative research program to overhaul how environmental chemicals are tested by shifting the focus away from animal testing and toward cell-based or biomolecular assays and computer models (see CBA News, 2/15/08). 
 
The program is spearheaded by the National Toxicology Program, the National Institutes of Health Chemical Genomics Center, and the EPA’s National Center for Computational Toxicology.
 
These efforts dovetail with those of Cellumen and the NCTR. In fact, the NCTR has had two teleconferences with the EPA to try to coordinate its efforts with those of the EPA’s ToxCast program, as well as those of the NCCT’s Virtual Liver project, said Tong.
 
He added that the FDA is still in the process of talking with the EPA, and that “several ideas are being exchanged,” one of which is to put an official mechanism in place to allow both agencies to work together to share data.
 
“Although we are still working on it, it is likely we will eventually have a memorandum of understanding in place for such a collaboration,” Tong said.  
