Roundtable: Pharma Sees Promise — and Many Obstacles — in Systems Toxicology

Part one in a two-part series.
 
BOSTON — With safety issues at the top of pharma’s mind, drug-discovery firms are turning to omics technologies with hopes of identifying toxic effects that traditional toxicology methods may miss.
 
But despite these new approaches, toxicology is still a far cry from a purely predictive science. While genomics, proteomics, metabolomics, and other approaches may add insight to the information gathered from conventional toxicity tests, drug developers are still struggling to find ways to use that new information effectively.
 
Last week, during Cambridge Healthtech Institute’s Bio-IT World conference, BioInform sat down with representatives from two pharmaceutical firms and an informatics vendor to discuss the current state of systems toxicology, how it can benefit drugmakers, and where it still fails to deliver. A transcript of the discussion, edited for length, follows.
 
Participants:
 
Maryann Whitley (MW): associate director of bioinformatics, Wyeth Research
 
Jim Xu (JX): associate research fellow, Pfizer Global Research and Development
 
Kurt Zingler (KZ): head of US business, Genedata
 

 
BioInform: How would you define systems toxicology? How predictive of a science is toxicology right now, and where would you like to see that going forward?
 
KZ: From Genedata’s perspective as a technology provider, it’s really about trying to bring all the sources of information together. There’s a lot of very good experimental toxicology and biology out there; people have been doing microarrays and PCR for years, and they’re beginning to do more metabolomics and proteomics and things like that. But for the most part, when we go out and talk to our customers, those are all on separate systems, meaning that it’s difficult for a toxicologist or a toxicoinformaticist to access the data, and even more difficult to try to do comparisons across those technologies.
 
One of the big initiatives that we’re involved in is the InnoMed predictive toxicology consortium in Europe. In that case, they’re actually going back and looking at drugs that have failed that did not come up in any of the classic toxicology screens or microarrays that everyone’s done, and basically looking at the breadth of omics and conventional toxicology and saying, ‘Is there something that we can see in these other technologies or maybe all these technologies together that will enable us to pinpoint what went wrong?’
 
MW: That actually hits on one of the issues that I see from the pharma side. I think we’ve done a pretty good job as an industry of building toxicogenomics databases; I think the algorithms are pretty good at doing the prediction. I think the problem is that our compounds don’t always match what we’ve profiled in the database.
 
And it’s not entirely surprising, because as a pharmaceutical industry, we’re always looking for new chemical entities and new chemical equity, and we’ve seen cases where we clearly have toxicity in the animal models, and the molecule will come back clean from some of these predictive mechanisms. Just as Kurt referred to the compounds that looked clean and then caused some kind of toxic response in the clinic, we have cases where it’s clearly toxic in the animal models and the predictive methods completely miss it, or can’t identify a mechanism of action, which is usually where we’re going with this type of a technology — more for mechanism of action once we’ve seen a toxic response. But we’ve seen cases where even to try and just get a mechanism of action, the model will actually show no toxicity at all, so you’re now sitting there with a predictor of no toxicity in an animal that is clearly, by histopathology, a toxic response.
 
I don’t think that’s surprising because we’re building the databases against known compounds, and as we stretch our chemical libraries into areas that are not represented in those databases, we’re going to get false negatives.
 
So for me, my concern is: I think the technologies are there, but I’m not quite clear yet how they fit into the pharmaceutical pipeline. The cost/value equation is not clear to me yet.
 
JX: We are at the state where we have so much data — we have in vitro data, in vivo data, omics data, et cetera — and we don’t have a quantitative measure of, ‘This prediction and this prediction give us this much sensitivity and this much specificity.’ A lot of this is experimental design.
 
So we’re retrospectively studying maybe 50 drugs. I wonder sometimes whether those are powered sufficiently to identify even a subset of animal liver tox, because liver tox is a manifestation of many different diseases — fatty liver, necrosis, apoptosis — all of which have different omics signatures associated with that phenotype. At the moment, I’m willing to step back and focus on a specific phenotype such that we power the experimental design sufficiently to really understand, not the genes, but the pathways leading up to that single, very homogeneous, specific phenotype.
 
I think that is the direction that we ought to go, and then build out from there, as opposed to saying, ‘I’m just going to assemble 50 liver toxic drugs, put them into animals, and then hopefully we will have a signature that will be able to predict novel chemical entities.’ We’re far from that, so I really want to step back and provide a more realistic view of where we can go forward.   
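To make Xu’s power concern concrete, here is a minimal simulation sketch in Python. The compound counts, effect size, and threshold below are illustrative assumptions, not figures from the discussion; the sketch simply shows how much harder it is to detect a single gene’s signal when only a subset of the “liver tox” compounds actually shares the phenotype being modeled.

# Minimal power sketch: detecting one gene's expression shift when all 50
# "liver tox" compounds share the phenotype vs. when only 15 of the 50 do.
# All numbers are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def detection_rate(n_tox, n_affected, effect=1.0, n_control=50, alpha=0.01, trials=2000):
    """Fraction of simulated experiments in which a t-test flags the gene."""
    hits = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n_control)
        tox = rng.normal(0.0, 1.0, n_tox)
        tox[:n_affected] += effect          # only the affected subset carries the signal
        _, p = stats.ttest_ind(tox, control)
        hits += p < alpha
    return hits / trials

print("homogeneous phenotype (50/50 affected):", detection_rate(n_tox=50, n_affected=50))
print("mixed liver tox (15/50 affected):      ", detection_rate(n_tox=50, n_affected=15))

With the same 50 treated animals, the diluted signal in the mixed group is flagged far less often, which is the argument for powering the design around one homogeneous phenotype first.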
 
KZ: When you guys do that — you said 50 drugs that are liver toxic — are there 50 compounds that cause fatty liver or steatosis, where you’re focusing in on a particular type of toxicity, but you still have enough compound breadth that you’re not looking at pharmacologic effects?
 
JX: That’s a very good question. The best experimental design is actually taking chemicals from the same class, and some of them have that effect and some of them do not. Once you have a subset like that, you don’t need a lot of drugs. Your position is limited by this particular area, which is OK, but as long as you have confidence in this particular area that drug ‘A’ perturbs this pathway and drug ‘B’ does not, and that is dissociated from the pharmacology, you have a way forward.
 
That has been the most successful [approach], at least in our experience — not the pan-omics kind of picture, but the most distinct, specific phenotype.
 
MW: I agree, but I think there is still a problem there. We’ve taken that more targeted view as well, but coming back again, not only do you need chemical equity, but you need targets. So dissecting out what is [a] pharmacological effect and what is [a] toxicology effect is difficult if you don’t really have that true negative.
 
If you’re looking for a drug that will inhibit target ‘A,’ and everything that you put into the rat models is toxic, you can’t distinguish toxicity from pharmacological effect, even if that target shouldn’t be in the liver. You can argue, ‘Well, that target shouldn’t be in the liver; that should not be a pharmacological effect,’ but until you actually come up with a compound that’s not toxic and hits the target, you don’t have a negative control.
 
JX: Yeah, we actually had an experience where we were lucky enough to find structural analogs that are inactive toward the pharmacological target, and we then asked the project team to redesign the studies, which is not a traditional study design. You wouldn’t put an inactive compound in an animal, but that’s the only way to get it.
 
MW: That’s right. We have done that, too.
 
JX: It’s two compounds and you ask a specific question, and then bring together all the technologies — the omics, the histopathology, et cetera. So we have all these tools at our disposal, but we are almost going back to traditional ways of investigative tox, and then using the holistic toolbox as a way to redefine systems toxicology.
 
KZ: Have you tried RNAi? Something that turns off the gene, but presumably wouldn’t have a toxic effect? It would silence whatever target you’re going after.
 
JX: What we’ve found is that sometimes that works and sometimes it doesn’t, really because knocking down a protein by 80 percent, which is beautiful in vivo, is different than inhibiting that protein with a small molecule. With the inhibitor, the protein is still there; it just can’t perform its particular function while the inhibitor is bound, but it still does other things. Knocking it down just abolishes everything.
 
Another thing is, even with RNAi, there are off-target effects.
 
KZ: A good number of the customers that we’ve spoken with have said that instead of looking for a compound that’s wholly toxic and doesn’t have a pharmacological effect, what they’ll do is try to gather enough compounds that have the same toxicity but completely different pharmacologic effects. So there are 50 compounds that cause toxicity, and they target all these different pathways and all these different diseases, so when they build their classifier, their predictor for toxicity, each of those pharmacologic effects accounts for only one or two points of that 50. It essentially becomes part of the noise, and the real effect you’re seeing is the toxicity.
 
But even in those cases, I think with the technology that they have now, the issues that are starting to trouble them are, ‘Are we seeing a cause? What is the initial point of this toxicity? Or are we seeing apoptosis, necrosis, inflammation, all this other stuff that is just a more general sign that we’ve caused some problem early on that may not be specific enough to identify what’s going on for the mechanism?’
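The classifier strategy Zingler outlines, many compounds sharing one toxicity but spanning different pharmacologies, can be sketched on synthetic data. In the minimal Python example below (entirely simulated expression profiles, not any vendor’s actual model), every compound perturbs its own random set of “pharmacology” genes, so only the shared toxicity genes end up driving the classifier.

# Sketch of a toxicity classifier in which pharmacology becomes noise.
# All profiles are simulated; genes 0-9 carry the shared toxicity signature,
# and each compound also perturbs its own random set of "pharmacology" genes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_genes = 500
tox_genes = np.arange(10)

def expression_profile(toxic):
    x = rng.normal(0.0, 1.0, n_genes)
    pharm = rng.choice(np.arange(10, n_genes), size=10, replace=False)
    x[pharm] += 1.5                 # compound-specific pharmacologic effect
    if toxic:
        x[tox_genes] += 1.5         # toxicity signature shared by all toxic compounds
    return x

X = np.array([expression_profile(t) for t in [True] * 50 + [False] * 50])
y = np.array([1] * 50 + [0] * 50)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

clf.fit(X, y)
top = np.argsort(np.abs(clf.coef_[0]))[::-1][:10]
print("top-weighted genes:", sorted(top.tolist()), "(toxicity genes are 0-9)")

Because the shared signature appears in half the training set while any single pharmacologic effect appears only once, the top-weighted genes land almost entirely in the shared toxicity set, which is the sense in which pharmacology becomes part of the noise.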
 
JX: So in the consortium that is studying drugs that have failed, do they converge into certain pathways where you can say, ‘OK, the drugs failed because we failed to recognize some omics response?’
 
KZ: The data is just starting to come out of the consortium. There are two model compounds, and there are 14 compounds that failed later, compounds that, like you said, passed through all the other tests that we have for toxicity.
 
The goal of InnoMed is to say, ‘OK, these went through, for the most part, conventional tox. Some of them also did microarray analysis. But now if we do this whole breadth of analysis, do we see something when we do NMR for metabolomics, or do we see something when we do LC/MS for metabolomics, or 2D gels? Are there other systems that will bring up the red flag?’
 
The study itself is actually the first phase of a much longer study. The first phase is [determining] which technologies might be appropriate. The idea after that is to build a database of hundreds of thousands of compounds that might have 50 that cause steatosis, 50 that cause fatty liver, and then people can actually come in with a single compound, run that same battery of tests, and go through it one by one: does it look like this, does it look like this?
 
The idea of the EU in getting all the pharma companies involved was that there are all these technologies out there, but there hasn’t been a company yet to invest a great deal of money to try all these technologies and then follow what would be the best path to go through with them.
 
MW: I think those compounds that pass through the traditional toxicity screens are actually the highest hurdle for predictive tox. The challenge is not just to identify them, but to convince the champions of that compound that it’s going to be a problem, because they’re looking at the traditional toxicity screens and saying, ‘It’s clean. What are you telling me with this model? It’s a false positive.’
 
I also see an issue here with the perception of what predictive toxicology can tell you. It goes back to the sensitivity and specificity question. If you can’t tell the scientist who’s running that program, ‘We have a hit here with a 95 percent likelihood of being a true positive,’ and they have no other data to support that that molecule is a problem, they’re going to say, ‘Yeah, right. Go away. We’re going forward.’
 
I see a problem with how this fits into the drug pipeline. I think if you have a program that’s experiencing some toxicity problems, then going at it with a lot of different technologies and very specific models makes sense to try and move that program forward. As a screening modality, there are up-front costs to generate these large databases, and then [you have to get] over this whole perception of, ‘It looks clean in all my typical toxicity models, but the mathematical model says it’s going to cause fatty liver. I don’t see fatty liver.’
 
I think that’s a huge issue in terms of being able to use this data to help drive decisions.
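Whitley’s sensitivity-and-specificity point is essentially about positive predictive value. A back-of-the-envelope calculation, using numbers assumed purely for illustration rather than figures cited in the discussion, shows why a flag from even a reasonably accurate model is easy for a program team to dismiss when the liability being predicted is rare among the compounds screened.

# Positive predictive value of a toxicity flag (all numbers are illustrative).
sensitivity = 0.90   # fraction of truly toxic compounds the model flags
specificity = 0.90   # fraction of clean compounds the model passes
prevalence = 0.05    # fraction of screened compounds with the true liability

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
ppv = true_pos / (true_pos + false_pos)
print(f"P(truly toxic | model flags it) = {ppv:.2f}")   # roughly 0.32

Under those assumptions, roughly two out of three flags are false alarms, which is why a hit with no corroborating histopathology tends to get the ‘Yeah, right, go away’ response Whitley describes.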
 
BioInform: Do you have a mechanism for feeding that knowledge back into the mathematical models? If you do learn that a compound is actually toxic, can you feed that back into your classifiers to strengthen future predictions?
 
MW: I’m not sure how the data providers are handling that. We usually send our data to one of the data providers if we’re looking for that type of a model, and I’m not sure what the typical agreements are for how they would incorporate that data into their database.
 
BioInform: So Iconix or one of those companies?
 
MW: Yeah, Iconix, Gene Logic, Genedata. I’m just not aware of what those contracts look like. Internally, we have not developed that kind of a large database.
 
KZ: With the InnoMed initiative, that’s the whole reason for the consortium: the cost is so high that 15 companies, together with the EU, can bear a much bigger burden than any one company could.
 
The second thing is that we’re approaching it with a different vision — that these are 14 compounds that we know failed, and the job is not to push them through to get drugs. The job is to figure out how and why they failed. That’s a very different job than you’re faced with, because you’re the bad guy. It’s like, here’s my really great compound and you’re the guy who’s going to take away the punchbowl.
 
Part of what this consortium is meant to do is to give you the ammunition to say, ‘Here’s my model, but here are other compounds that have gone through and we get the exact same results, and here is another test I can do that will prove to you that we are going to see this problem in the clinic, or maybe that we’ve had these kinds of problems and here are the things we’ve done to alleviate them.’
 
MW: Coming back to the sensitivity and specificity question, 14 compounds is not a lot of data.
 
KZ: It’s definitely not big enough to build a predictive database. We don’t have 50 or 500 compounds, so we’re going to get some false reasons for why these things are toxic, but I think it will provide the tools to go back and then look at a broader range of compounds knowing that we see something different with NMR, or we see something different with SELDI, and so on.
 
JX: Do you also have 14 drugs that succeeded, that came from the same therapeutic targets?
 
KZ: Not to my knowledge.    
 
JX: That worries me, because you can have a very sensitive technique, but not very specific.
 
KZ: I think that is a concern of the group. Again, as you might imagine, gathering 15 different pharmaceutical organizations to make a decision is more difficult than to make [a decision at] one pharma, so I think they chose to focus early on with the knowledge that they’re going to get very [high] sensitivity, but an inability to get specificity. That they’re going to get a hit and they’re going to say, ‘This failed, and here are five possible things that we saw, and we won’t really know what’s specific about this until we go to 100 compounds.’
 
Next week, the roundtable participants discuss integrating human in vivo data into toxicology studies, the pros and cons of pathway analysis, and the need for pharma to publish more negative results.
