Dale Johnson, President & CEO, Emiliem
Dale Johnson has worked in the field of pharmaceutical toxicology for more than 25 years, most recently serving as vice president of drug assessment and development at Chiron. Johnson left Chiron around six months ago to found Emiliem, a drug toxicity startup based in the San Francisco Bay Area. He is also an adjunct professor in molecular toxicology at the University of California, Berkeley, where he teaches computational toxicology.
Johnson recently co-authored an opinion paper in Current Opinion in Drug Discovery & Development [2006 9(1):29-37] noting that despite the potential for computational tools to predict toxicity, the impact of this technology has so far been "modest and relatively narrow in scope."
Nevertheless, Johnson identified a number of promising directions for the field in the paper, including pathway analysis and other computational systems biology approaches. BioInform spoke to Johnson by phone to learn more about where he thinks the discipline is heading.
Your article mentions that computational toxicology hasn't really lived up to its promise. How would you characterize the current state of the art of these tools?
Currently what you see are computational tools that incorporate certain types of databases, with different types of algorithms associated with those tools — they can be chemical-structure fragment-based, or statistical, where some QSAR [quantitative structure-activity relationship] analysis is built into statistical programs within the software …
Most of the programs have been used more extensively within industry for mutagenicity and carcinogenicity predictions, and there are good reasons for that, and one is that the endpoints are really well established, so you can look at things across different studies and combine them together. So the issue then, in certain types of tools, is the chemical space — what you're adding in there and predicting from. Certain types of commercial applications, where the databases come with the application, may or may not have direct utility within a pharmaceutical company because the chemical space is different. And that's usually the case. If somebody is working on an analog series around a certain target, that type of chemistry won't be present in commercial databases. So the industry [research] group has to add that into the database — collect information and so forth, but of course, they're dealing with a series of compounds where they don't have data yet, so that becomes the issue.
So then what typically happens, if this is a new analog series, is that you look for some kind of screening assay that becomes relevant for whatever you're trying to predict, and then you use the standard SAR and QSAR types of analysis — and more likely SAR visualization tools — to rapidly look at how that particular effect may be associated with chemical structure. So this is kind of the way it has evolved within the pharmaceutical industry, which is not too satisfying from a pharmaceutical R&D standpoint. It is a relatively large task to think about because ultimately you're stuck with the question of whether you're predicting for animal studies or for humans. Even the commercial applications are keyed on predicting what kind of animal toxicity you might get, or what kind of in vitro toxicity, and what we're really after — and where all the interest is from a drug development standpoint — is what could possibly happen in humans, and how do you get to that point?
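The local-model workflow Johnson describes — building a statistical QSAR model around an in-house analog series once commercial databases run out of relevant chemical space — can be sketched roughly as follows. This is a minimal illustration rather than any vendor's implementation; it assumes RDKit for descriptors and scikit-learn for the statistical step, and the SMILES strings and assay labels are placeholders.

```python
# Minimal local QSAR sketch: descriptors from an in-house analog series,
# plus a simple statistical model fit to whatever assay data exists so far.
# RDKit and scikit-learn are assumed; compounds and labels are placeholders.
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles):
    """Turn a SMILES string into a small descriptor vector."""
    mol = Chem.MolFromSmiles(smiles)
    return [
        Descriptors.MolWt(mol),       # molecular weight
        Descriptors.MolLogP(mol),     # lipophilicity
        Descriptors.TPSA(mol),        # polar surface area
        Descriptors.NumHDonors(mol),  # hydrogen-bond donors
    ]

# Placeholder analog series with screening-assay outcomes (1 = flagged in assay).
train_smiles = ["CCO", "CCN", "c1ccccc1O", "CC(=O)Nc1ccc(O)cc1"]
train_labels = [0, 0, 1, 0]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit([featurize(s) for s in train_smiles], train_labels)

# Score a new analog that has no assay data yet.
print(model.predict_proba([featurize("c1ccccc1N")]))
```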
What would you consider to be some promising efforts underway that are getting the industry to that point?
Probably the most promising thing that's occurred over the last five years is that we have a better way of defining a toxicity endpoint. We know that toxicity occurs in a dynamic fashion, so you can't just look at it as a single time point, a snapshot in time — it unfolds over time. So our ability to look at that through various types of technologies has increased dramatically.
You can look at any type of genomic technology, or high-content screening, or, for instance, toxicogenomics studies where there's a good, solid temporal aspect built into the study design.
Now we're able to look at endpoints in a much different way. So I think that, plus a much better understanding of metabolism, transporters, and the variability that exists within those — polymorphisms, ethnic differences and so forth — gives us a tool to expand our knowledge of what's actually going to happen to a drug when it gets into a human.
That's been established quite nicely for animals. So we can at this stage make a much better guess about what a relevant animal model is as we look at toxicity — rather than simply looking at normal animals of classic strains. Now we can look at disease animal models, or animals that have certain pathways present, or even certain mutations or polymorphisms.
Of course, that puts pressure on how you actually get that information into computational tools, and probably the most important aspect of that is now being developed with what you could call either systems biology or pathway analysis types of software and programs.
We're right at the stage where we can look at connecting pathways and the dynamic events that occur, and begin to predict those. So probably the most important thing that can happen is building a better understanding of endpoints and tying those endpoints into key pathways — within metabolism and distribution, plus disease risk factors. I think that is actually going to be the key to understanding this.
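As a rough illustration of the pathway-analysis idea Johnson points to, a minimal over-representation test asks whether the genes perturbed in a toxicogenomics experiment fall into a known pathway more often than chance would predict. The sketch below uses a hypergeometric test; the gene identifiers and pathway membership are placeholders, not data from any of the tools discussed here.

```python
# Minimal pathway over-representation sketch: given genes perturbed in a
# toxicogenomics study, ask whether a curated pathway is enriched among them.
# Gene and pathway identifiers are placeholders for illustration only.
from scipy.stats import hypergeom

# Background: all genes measured on the array.
background = {f"gene{i}" for i in range(1, 1001)}

# Genes significantly changed after compound treatment (placeholder set).
perturbed = {"gene2", "gene5", "gene7", "gene11", "gene13", "gene17"}

# A curated pathway gene set (placeholder membership).
pathway = {"gene2", "gene5", "gene7", "gene11", "gene42", "gene99"}

N = len(background)            # genes measured
K = len(pathway & background)  # pathway genes present on the array
n = len(perturbed)             # perturbed genes
k = len(perturbed & pathway)   # perturbed genes that fall in the pathway

# P(at least k pathway hits among n draws) under the hypergeometric null.
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"{k}/{n} perturbed genes in pathway, p = {p_value:.3g}")
```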
There does seem to be more overlap recently between traditional computational toxicology methods and bioinformatics tools — especially pathway and simulation technologies. What are you seeing in terms of usage, though? Are people sticking to what they're used to, like SAR-based methods, or are they trying out these newer approaches?
If you follow the entry of these into industry, there is a lot of new interest in using pathway tools. … We see people tapping into those tools now very nicely — particularly from a metabolism standpoint. That seems to have been flourishing for a number of years now because the endpoint there — and this is probably one of the big differences — is actually the chemical itself. So there's a good way to simulate that, and a good way to look at the end-result analysis: it's actual analytical data that can be derived from various sources, so you can see exactly what's going on, and you're looking at the transformation and movement of the chemical itself.
When you get to the biology side, which is the toxicity, that becomes much more complex, and I think that's why it's lagged behind the metabolism part.
So being able to predict how a drug will be metabolized, how it moves through the body, where it actually sequesters in certain areas, and what happens in those interactions — being able to predict that through software approaches gives you a huge advantage in simulating what's going to go on from a toxicity standpoint.
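The metabolism-and-distribution simulation Johnson describes can be illustrated, in highly simplified form, with a one-compartment pharmacokinetic model: first-order absorption from the gut and first-order elimination from plasma. The rate constants, volume, and dose below are placeholders, and real PBPK tools use many more compartments and physiological parameters.

```python
# Highly simplified one-compartment PK sketch: first-order absorption (ka)
# from the gut and first-order elimination (ke) from plasma.
# All parameter values are placeholders for illustration only.
import numpy as np
from scipy.integrate import odeint

ka, ke = 1.2, 0.3   # absorption / elimination rate constants (1/h)
V = 40.0            # apparent volume of distribution (L)
dose = 100.0        # oral dose (mg)

def pk(y, t):
    gut, plasma = y
    return [-ka * gut,                # drug leaving the gut
            ka * gut - ke * plasma]   # drug entering and leaving plasma

t = np.linspace(0, 24, 200)                  # 24-hour time course
gut, plasma = odeint(pk, [dose, 0.0], t).T

conc = plasma / V                            # plasma concentration (mg/L)
print(f"Cmax ~ {conc.max():.2f} mg/L at t ~ {t[conc.argmax()]:.1f} h")
```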
You mentioned one promising sign being all the new genomics and toxicogenomics and screening data that's coming online. What kind of impact do you see from some of the public efforts in this area, like the NIH Molecular Libraries Roadmap initiative?
A huge impact. The major source of information right now is in the public domain, and it will continue to be. So those people who can utilize that data in various ways have a huge head start over everybody else. It's just amazing what is in the public domain at this stage and what's going to be there over time. So in many ways, one does not have to think specifically of the compound — and I think this is what has always slowed down the process from a toxicology standpoint; you always think that what you have to know is the effects of the compound you're working with — but new sources in the public domain allow you to probably know more about that compound than you can possibly imagine, because it's all involved in interconnecting pathways, and the dynamics of those pathways under certain situations.
The connectivity will be the key to it in the future.
You mention in the paper that the EPA is ramping up its efforts in this area, and that the FDA has a well-established computational toxicology program. Are these efforts helping advance the field?
The way it actually advances the field is that it puts everybody on notice that the major part of risk assessment can be done through various types of simulation. And risk assessment from an EPA standpoint is relatively complex. From an FDA standpoint, it promotes and moves all the other attempts forward. Even if people don't agree exactly with what's happening at certain agencies … the groups in there are extremely sincere and very talented, so what you see is a huge effort to try and figure out some of these major issues. So if you watch that very carefully, and if you watch what's happened at the EPA over time, it's pretty phenomenal — the various approaches and the types of QSAR analyses that have been developed, and even the physiologically based pharmacokinetic modeling. Everybody kind of feels that's a very specific thing for environmental chemicals and so forth, but — if one could figure out how to do it rapidly and move in a certain direction — it probably has great promise in the pharmaceutical industry.
I myself was always interested in whether or not you could predict where a monoclonal antibody was going to go and whether it would associate with its target, and do it through some sort of PK/PD modeling. That hasn't been done yet, but it certainly is feasible.
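The antibody question Johnson raises is usually approached through target binding within a PK/PD framework. A very reduced version of the pharmacodynamic piece is the equilibrium occupancy relationship below; the affinity and concentration values are placeholders, and a full model would add target turnover, tissue distribution, and antibody clearance.

```python
# Reduced PK/PD illustration: equilibrium target occupancy for an antibody,
# occupancy = C / (C + Kd). Kd and concentrations are placeholder values.
import numpy as np

kd_nM = 1.0                                   # assumed binding affinity (nM)
conc_nM = np.array([0.1, 1.0, 10.0, 100.0])   # plasma antibody levels (nM)

occupancy = conc_nM / (conc_nM + kd_nM)
for c, occ in zip(conc_nM, occupancy):
    print(f"{c:6.1f} nM antibody -> {100 * occ:5.1f}% target occupancy")
```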
What criteria are you and others using to gauge the success of these tools going forward? Say there is a decrease in adverse drug reactions — would you even be able to link that directly back to the use of these tools? Or are there other criteria that you're keeping an eye on in order to track this field?
What you'd like to be able to use is a couple of things: can you affect the attrition rate, and can you see a decrease in ADRs? Whether that's possible is another story — it's very complex and it may not be possible. So there may have to be other ways.
Probably the best way is to do it on a drug-by-drug basis: starting from a clinical trial standpoint, really being able to identify certain patient populations that would be more susceptible — susceptible both from an efficacy and a toxicity standpoint — and doing that not only for cancer drugs, which we do right now, but finding those very specific susceptible patient populations, running clinical trials in those patients, and getting answers quite early. If, in fact, we can move the toxicity part in the same way that the efficacy part is moving — and there's no reason why it can't be done, quite frankly — then you're at a stage where you can identify, in early clinical trials, and even through some pharmacogenetic-type trials, those patients that are going to be more susceptible. And I think that's where you're going to find the victory, and the metrics for what's happening in this field.
And that part will come from a complex look at a series of pathways and understanding what's going on from a disease standpoint, as well as from a chemical structure standpoint.
How far off would you say something like that is?
I don't think it's that far off; I think it's within the next five years. You'll see it probably in the cancer area first, and the reason is that field is already primed for a translational medicine approach, a personalized medicine approach. That's the direction it's going. So I think there's no barrier at all to introducing this type of concept from a cancer standpoint. There's no barrier, but it's complex to actually get there.
And then as more areas get into this type of clinical trial approach — as this gets out of cancer and moves into cardiovascular, moves into some other things — then what you're going to see is a demand that you actually can identify the safety aspects early. And that's probably what's going to drive it. It's not just the fact that the tools have to work; it's going to be driven by the reality of having to run clinical trials in a certain way.
Are there any other issues or trends in this area that you're keeping an eye on?
Well, the students. At UC Berkeley, we sat down maybe four or five years ago, and what we were looking at was this: if you start a molecular toxicology curriculum — and this is both undergraduate and graduate — you're really after some key areas that you know are going to be important in the future. That's why we decided to create a computational toxicology portion of the curriculum, and it is now actually a required course in the molecular tox curriculum. Students are being trained in various ways, and what I've been pretty successful at now is getting those students involved in internships at the undergraduate level. We have people who have had and will have internships at FDA and various companies — some of them computational tox tool companies, others biotech companies. So I think that's another driver — you start from the academic side. Because certainly systems biology is just taking off on the academic side, and the key is to be able to incorporate this process into that, and this will also drive the field.