Roundtable: Systems Tox Faces Obstacles in Species Barrier, Sparse Pathway Databases

Part two in a two-part series.
 
BOSTON — Two weeks ago, during Cambridge Healthtech Institute’s Bio-IT World conference, BioInform sat down with representatives from two pharmaceutical firms and an informatics vendor to discuss current trends in systems toxicology.
 
Last week, the participants discussed the potential of omics technologies to help identify toxicity in compounds that make it through traditional toxicity screens — as well as the informatics challenges associated with that approach. 
 
This week, the discussion turns to integrating human in vitro data into toxicology research, the pros and cons of pathway analysis, and the need for pharma to publish more of these kinds of studies.
 
Participants:
 
Maryann Whitley (MW): associate director of bioinformatics, Wyeth Research
 
Jim Xu (JX): associate research fellow, Pfizer Global Research and Development
 
Kurt Zingler (KZ): head of US business, Genedata
 

 
BioInform: What are some outstanding challenges in predictive toxicity?
 
MW: The species barrier is [one] issue. We can build models that will predict toxicity in the rat, but that doesn’t tell us anything about what we’re going to see in human.
 
JX: What about human in vitro systems, and using all the systems toxicology tools there to try and bridge that? So you have the rodent in vitro and in vivo and then the human in vitro, and then maybe if the same pathways are affected in human in vitro and rodent in vitro, you can say something about human in vivo. Has that been taken up?
 
MW: I know a couple of years ago, we were working with primary human hepatocytes, and they just proved to be extremely difficult to work with. I am not aware right now that we have any human in vitro systems up and running.
 
One way to go at it was actually the Gene Logic tox consortium. That was human hepatocytes, and Wyeth was one of the partners involved in … that consortium. But to my knowledge, it was technically very difficult. Just the primary cultures were very difficult. So the next stage you’d have to go through would be some sort of established line and then you get into all kinds of [issues].
 
[When it comes to the] predictive capabilities out of established lines, I think all bets are off.
 
JX: We actually worked out a way to culture human hepatocytes in 96-well plates, and we surveyed 350 drugs, and we are coming up with predictions that are complementary to the in vivo animal models. So adding them all together, we are actually reaching somewhere around 75 percent sensitivity with very high specificity. That’s very promising, and we are still looking for ways to improve our combined use of in vitro and in vivo.
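
As a rough illustration of how calls from two complementary assays might be combined and scored, the following minimal Python sketch flags a compound if either system calls it toxic and computes sensitivity and specificity against known outcomes. The combination rule and all values are hypothetical placeholders, not the 350-drug panel Xu describes.

```python
# Score combined in vitro / in vivo toxicity calls against known outcomes.
# All data below are hypothetical placeholders.

def sensitivity_specificity(truth, calls):
    """truth/calls: parallel lists of booleans (True = toxic / flagged toxic)."""
    tp = sum(t and c for t, c in zip(truth, calls))
    tn = sum(not t and not c for t, c in zip(truth, calls))
    fp = sum(not t and c for t, c in zip(truth, calls))
    fn = sum(t and not c for t, c in zip(truth, calls))
    return tp / (tp + fn), tn / (tn + fp)

# Complementary assays: each catches toxicants the other misses.
truth    = [True, True, True, True, False, False, False, False]
in_vivo  = [True, True, False, False, False, False, False, False]
in_vitro = [False, False, True, True, False, True, False, False]
combined = [a or b for a, b in zip(in_vivo, in_vitro)]

sens, spec = sensitivity_specificity(truth, combined)
print(f"combined: sensitivity={sens:.2f}, specificity={spec:.2f}")
```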
 
MW: So do the hepatocytes pick up things that the rat models miss?
 
JX: Yes. They pick up some of the idiosyncratic toxicants that are only toxic in humans, but not in rodents.
 
MW: Do you see convergence in some of the pathways between the rat models and the human, or even dog?
 
JX: It just so happens that human hepatocytes are more sensitive to these agents than the animal hepatocytes.
 
MW: Oh, so it’s a dosing effect.   
 
JX: Yes. It could be the intracellular drug concentration, et cetera, et cetera. So there are a whole variety of reasons.
 
One advantage of this in vitro model is that once you establish a technology that works in a 96-well plate, you can actually survey a large panel of drugs, analogs, and pharmacologically inactive analogs, and that allows you to design your experiment with the proper statistical rigor.
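
That statistical-rigor point lends itself to a quick back-of-the-envelope calculation. The sketch below estimates how many compounds per arm such a plate-based panel would need to distinguish toxic from pharmacologically inactive analogs, assuming purely illustrative assay hit rates of 60 and 20 percent:

```python
# Rough sample-size math for a two-arm compound panel (toxic analogs vs.
# pharmacologically inactive analogs). Effect sizes are illustrative
# assumptions, not values from the interview.
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Two-sided two-proportion z-test, normal approximation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# e.g. toxic analogs hit the assay 60% of the time, inactive analogs 20%
print(n_per_group(0.60, 0.20))  # compounds needed in each arm
```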
 
MW: And it requires very little drug, too, so you can do it way before you have enough compound to go into the rat models.
 
JX: That’s right. And also, you can position it before you go into any in vivo tests at all. So now if we say something has a potential signal, and even though we don’t have proof yet, these things are positioned early enough that there are still workarounds at that point.
 
BioInform: Jim and Maryann, you both mentioned pathways and getting a better understanding of the mechanism of action of toxicity. How far along would you say that part of the equation is in building these systems?
 
JX: We’re not quite there yet; we’re far from it. Part of the problem is the ontologies. The Gene Ontology says this gene does this, but maybe the gene does a whole lot of other things that we don’t know about. And whether the gene does that in the human hepatocyte as opposed to another [cell type] — say, a human chondrocyte — has yet to be defined.
 
That said, the hope is that instead of looking at individual genes, where noise alone can produce false positives simply because we have so many genes, you look at pathways; that at least gives you some way of converging the different signals to say, ‘OK, maybe the redox state of the cell is perturbed by this toxic agent.’ That has been shown to be the case for some agents, and maybe with some more examples like that, you can start to look at your data that way, as opposed to just individual genes or signatures.
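
One common way to formalize that convergence is an over-representation test: ask whether the perturbed genes pile up in any one pathway more often than chance would allow. The sketch below uses a hypergeometric test; the gene sets and hit list are toy placeholders, not data from either company.

```python
# Pathway over-representation test: do the perturbed genes cluster in a
# pathway more than random draws would explain? Toy data throughout.
from scipy.stats import hypergeom

N_GENES = 20000                # genes measured on the array
hits = {"GPX1", "TXN", "NQO1", "HMOX1", "GCLC", "ABCB1"}   # perturbed genes
pathways = {
    "redox_homeostasis": {"GPX1", "TXN", "NQO1", "HMOX1", "GCLC", "SOD1"},
    "xenobiotic_efflux": {"ABCB1", "ABCC2", "ABCG2"},
}

for name, genes in pathways.items():
    k = len(hits & genes)      # perturbed genes landing in this pathway
    # P(overlap >= k) under random draws of len(hits) genes from the array
    p = hypergeom.sf(k - 1, N_GENES, len(genes), len(hits))
    print(f"{name}: overlap={k}/{len(genes)}, p={p:.2e}")
```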
 
It’s not the top 50 genes that are perturbed the most [that we want], it’s really the most predictive. That is what we’re looking for. Maybe you see four or five different pathways that seem to pop up again and again, and that would give you more confidence that you should watch out for those pathways.
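
The distinction between ‘most perturbed’ and ‘most predictive’ can be made concrete by ranking genes on a classification metric such as AUC rather than on fold change. In the synthetic example below, the gene with the larger shift is the worse predictor because it is noisier:

```python
# Rank genes by predictive power (AUC) instead of raw perturbation size.
# Expression values and toxicity labels are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = np.array([1] * 30 + [0] * 30)   # 1 = toxic compound, 0 = clean

# gene_a: large but noisy shift; gene_b: small but consistent shift
gene_a = rng.normal(loc=labels * 3.0, scale=4.0)
gene_b = rng.normal(loc=labels * 0.8, scale=0.5)

for name, expr in [("gene_a", gene_a), ("gene_b", gene_b)]:
    shift = expr[labels == 1].mean() - expr[labels == 0].mean()
    auc = roc_auc_score(labels, expr)
    print(f"{name}: mean shift={shift:+.2f}, AUC={auc:.2f}")
# gene_a tops a fold-change list; gene_b is the better predictor.
```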
 
We’re actively looking for companies that can provide those tools, starting with better gene ontologies that are organ-specific, ontologies that link proteins together in an organ-specific way. Then we can visualize them by saying, ‘OK, all these drugs, blast them through the literature,’ so this is text mining, literature mining, et cetera. And at the end, it’s the biologist’s insight about whether this protein makes sense, and experience plays a lot into that as well.
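
One small piece of that wish list, restricting pathway annotations to genes actually expressed in the organ of interest, reduces to simple set operations once an expression atlas is available. The annotation terms and gene sets below are hypothetical stand-ins for a real ontology and a liver expression catalog:

```python
# Restrict pathway/GO annotations to genes expressed in the target organ.
# All sets are hypothetical stand-ins for real annotation/expression data.
go_annotations = {
    "response to oxidative stress": {"GPX1", "SOD1", "HMOX1", "COL2A1"},
    "cartilage development": {"COL2A1", "SOX9", "ACAN"},
}
liver_expressed = {"GPX1", "SOD1", "HMOX1", "ALB", "CYP3A4"}

liver_go = {
    term: genes & liver_expressed
    for term, genes in go_annotations.items()
    if genes & liver_expressed    # drop terms with no liver-expressed genes
}
print(liver_go)   # only the oxidative-stress term survives, minus COL2A1
```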
 
MW: I think the pathway tools for in vivo work are very difficult, because in vivo, you’re dealing with secondary, tertiary, and even further downstream effects. If you can control it in an in vitro setting, I think you can make much more sense out of the pathway information available, and then try to extrapolate that back into your in vivo data.
 
But just starting with the in vivo data, even after just a single dose, if you’re 24 hours past that dose, you have no idea how many pathways have come and gone in that 24 hours, and you’re looking at bystander effects three or four pathways away from the primary insult.
 
At least in vitro, you have a little bit better way to design the experiment where you can actually follow those time changes, and you know you’ve got a pure set of cells and you’re not looking at all the different cell types in a human liver responding to many different pathways being triggered by whatever that primary insult was.
 
BioInform: Have any of you experienced or come across examples of any success stories in systems toxicology, or at least areas where there is some hope for these methods?   
 
MW: Why? Are we sounding negative? [Laughs] 
 
KZ: I think on an individual-compound level there are lots of little success stories, things that didn’t make it through, things that were changed. But we haven’t seen as much of what Jim mentioned: ‘Here’s a fully put-together program, and going from A to B, our chances are 80 or 90 percent that this drug is non-toxic.’
 
There are just so many variables, and I think we’re trying to expand the picture in the way we’re using these technologies, but there are still species barriers, there are still genetic diversity barriers when you get into humans, and I think there’s always going to be that.
 
We’ll come to a point where, barring another $500 million, this is as good as the answer’s going to be. But it is getting better. I think that certainly the pharma companies we’ve dealt with and biotech as well have improved their rates going through with a few specific things, but I don’t think anyone’s happy with where they are right now.
 
JX: Some of the success stories are program-specific, where people can sort of put their finger on it and say, ‘I understand this biology and I perturb the system with pharmacologically active compounds, and here are my signaling pathways and here’s my response. And I can sort of put my hands around them and understand them, and my follow-up molecule does not have that, and now I can move forward.’
 
So part of the approach is that you have a signal already, and then you go back to try to construct this landscape and use that to predict. But I have yet to find an omic signature that is universal. I think maybe that is just not there.
 
MW: I would agree. I think on a program-focused basis we’ve had some success stories. We’ve also had cases where, as I said earlier, it’s clearly toxic in the rat but the model says it’s fine. But we have had some success on a program-specific basis in helping to identify a window of safety in the animal models. [For example,] in the animal model there’s toxicity at the highest dose tested, but is there still a safety window that’s acceptable? [So you can take] the signature that was seen at the high dose, [determine] how far back you have to dose before that signature basically goes back to baseline, and use that to say, ‘This is our safety window,’ and move forward. But that’s very program-specific.
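
The safety-window logic Whitley outlines can be sketched as a simple dose scan: score the high-dose toxicity signature in each dose group and report the highest dose whose score still falls within the vehicle-control range. Every number below is invented for illustration:

```python
# Find the highest dose whose signature score still sits within the
# vehicle-control (baseline) range. Doses and scores are made-up numbers.
import numpy as np

baseline_scores = np.array([0.02, -0.05, 0.04, 0.01, -0.03])  # vehicle animals
cutoff = baseline_scores.mean() + 3 * baseline_scores.std(ddof=1)

# mean signature score per dose group (mg/kg -> score)
dose_scores = {1: 0.01, 3: 0.05, 10: 0.40, 30: 1.80, 100: 3.60}

safe = [d for d, s in sorted(dose_scores.items()) if s <= cutoff]
print(f"signature back at baseline up to {max(safe)} mg/kg" if safe
      else "no dose within baseline range")
```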
 
KZ: I think what’s also become apparent is that as you divide up the toxicity into more homogeneous [diseases], I think there are some where some of our customers are very comfortable: ‘I can use microarrays and 90 percent of the time I can see this particular kind, but with this other kind, microarrays don’t help me at all.’ And some companies are even using some of those omics even when they find [toxicity] based on histopathology or enzyme tests.
 
So I think there are some areas where people are getting confidence in not just compounds, but [whether] you can go to your backup molecules or even sometimes go against what people see in conventional toxicology.   
 
BioInform: So it’s evolving to where people are getting comfortable with the strengths and weaknesses of the tools themselves and where they can best be applied?
 
KZ: A couple of people I’ve spoken with are saying, ‘In this particular kind of toxicity, I can use microarrays every time,’ and in others, they say, ‘There’s no use even trying microarrays because [they’ve] never shown anything.’
 
BioInform: Jim, you mentioned the concept of a universal omic signature. Is that even possible? Given unlimited funding and resources and the world’s biggest database, would that even be possible to identify?
 
JX: I think biologically that’s not there. It’s not possible, so it’s probably not worth the huge resources. I would rather focus on defining a specific phenotype for toxicity and power the experimental design that way.
 
It’s better to say, ‘I can understand this, but I can’t say too much about that.’ That’s fine. As long as you have confidence in something, even if it’s a small piece of the big picture, and conventional methodology doesn’t give you that, then that is value added; you just need to be very confident about it. Otherwise, you oversell this whole technology, and that would be a dangerous situation.
 
BioInform: Short term, what’s on everybody’s wish lists, in terms of specific technologies or data that would make your lives easier? 
 
JX: I think that, as an industry, we ought to publish more. This is ultimately a pre-competitive area where people can publish their findings so we can all avoid toxicity. That would be my wish going forward.
 
KZ: Along the same lines, my hope is that we’ll begin to see some of these success stories published. I hear a lot more about how things are working or not working than I ever see in print. A lot of companies still [consider] this part of their corporate IP: ‘We’ve identified this and it’s going to help shrink our pipeline or make it more focused.’ But I think there’s a lot of collective knowledge out there, so those success stories are probably what’s really needed at this point.
 
MW: I would agree, and I think some of the data you would also want to see are the cases where the compound clearly failed, and those are the hardest to publish. You would have to spend time now to publish on a program where the program may not be dead, but you’ve got a series of compounds that have clearly caused toxicity, and spending time to write that up and make the data available is not seen as a high-value activity. It’s really hard to get people to spend time on that. They want to write the success paper; they don’t want to write the failure paper. But that’s part and parcel, I think, of building this public-domain knowledge of compounds that do and do not show toxicity.
