John Leighton, Co-chair of CDER's Nonclinical PGx Subcommittee

At A Glance

Name: John Leighton

Title: Supervisory Pharmacologist, FDA CDER Division of Oncology Drug Products; Co-chair of CDER's Nonclinical Pharmacogenomics Subcommittee

Background: Reviewer, FDA Center for Veterinary Medicine

Education: PhD from the Department of Physiology and Biophysics at the University of Illinois, Urbana-Champaign; Postdoctoral training at the University of Colorado Health Sciences Center in Denver

This week, the FDA released a Manual of Policies and Procedures (MAPP) governing the ever-developing role of the Nonclinical Pharmacogenomics Subcommittee, which was created in May 2002 and is overseen by the Center for Drug Evaluation and Research Pharmacology and Toxicology Coordinating Committee.

The subcommittee is charged with disseminating guidance on pharmacogenomics submissions to both CDER reviewers and the pharmaceutical industry, according to the new MAPP. It serves as a resource to the PTCC and CDER on scientific and regulatory aspects of emerging technical problems.

Pharmacogenomics Reporter caught up with subcommittee co-chair John Leighton, supervisory pharmacologist at the CDER Division of Oncology Drug Products, to get a closer look at this group.

Can you explain the typical submissions the Nonclinical Pharmacogenomics Subcommittee handles?

The committee started a couple of years ago with the understanding that this is a new field, and we’d better figure out what’s going on in it, because we may or may not see submissions in this field. This was like 2002.

So, we formed the committee, and began working with these companies.

So, what would a nonclinical submission be? In many ways they're like what's now called voluntary submissions, and the voluntary submissions would cover both nonclinical and clinical data.

We intend to work with the [Interdisciplinary Pharmacogenomic Review Group] whenever possible, to offer the pharmacology/toxicology perspective on what our review discipline needs in order for these submissions to work for us.

Can you cite an example?

We’ve had three different kinds, and they all have a slightly different flavor. We gained experience from large datasets, stand-alone studies, as well as class effects of drugs.

So, from the submissions that we've seen so far, and as reported in [this abstract], we got an idea of some of the different kinds of submissions that we might expect in support of nonclinical findings.

I understand that the new Manual of Policies and Procedures puts into print what you've already been doing. Can you tell me anything that is new or unique in this document?

It’s an evolving process, and I hope that the MAPP makes it clear that it’s evolving, because we’re still learning about the best way and format that companies will want to use genomic data to support an application.

It’s great for data mining and drug-discovery-type purposes, but that’s not an area that’s traditionally been of major importance to us. We usually come in a little later in the process.

I break it down in terms of drug discovery and drug development. Data mining I see as very much a flexible process, where you're looking to try to answer questions. By the time you get a little later in the process, you've had a lot of that basic work done. You know what your targets are going to be — or at least you think you know what they are. And that means that you move it into the GLP environment. That has changed the kinds of things people do, compared to the past, because there's some reluctance to take samples out and do exploratory studies in a GLP environment.

The new MAPP says that your subcommittee will “seek to develop guidance, address emerging technical problems” — are you able to share any of this?

We've contributed to the genomics guidance that's coming out, and we have a reservoir of expertise. One possibility — I'm speculating here, maybe I shouldn't do that — may be to tap into this expertise for these voluntary submissions that come through the pipeline, because we have a core group of people who are interested in the field — interested in the technology — who meet on a monthly basis to discuss the issues relevant to standard-setting.

The nice thing about the committee is that it brings in both the regulatory, as well as the laboratory-based people, and that’s described in the MAPP. So, we get some practical hands-on expertise, as well as expertise from the people who are actually reviewing documents, and that has worked out very well in our committee, in terms of a nice mix. Because you get the reality check, versus what’s possible.

So your subcommittee will be looking at some of the voluntary pharmacogenomics information that will be submitted?

I can't commit to that absolutely, because those MAPPs [that will accompany the final pharmacogenomics data submissions guidelines] haven't been finalized, to my knowledge. So, since they haven't been released, I hesitate to talk about documents that are not fully public — you know, things can always change.

But we intend to continue to look at this and develop review standards for looking at nonclinical data, and we want to do this in cooperation with other government agencies. We don't want to look at this in a vacuum — we want to be broad, and bring in various points of view, to be sure that what does come out, finally, represents a diverse set of opinions and expertise.

Ultimately, we really don't want to come out with one set of review standards for, say, just a clinical study. With a whole-genome array, you don't want one set of standards for a clinical array on a tumor sample, for example, and another for a rat liver. It doesn't really make any sense, because the data quality should be the same regardless.

What this MAPP hopefully does is provide the framework where this committee can work with other groups to come up with a common set of standards, and I think the IPRG will clearly be one of the leaders in that.

What are some committees or groups that you work with most often?

Well, I work with the IPRG, and that’s being led by Felix Frueh. And I do a bunch of things that have nothing to do with genomics.

I do some stuff with the [International Life Sciences Institute] groups. They're doing an awful lot in genomics — they're interacting with the FDA, but the [US Environmental Protection Agency] is also involved in ILSI.

The point is that we’re talking with our counterparts across the government in order to think about how data should be submitted and analyzed and examined from a review perspective.

One of the things that we did internally was organize a scientific workshop — a series of seminars. We brought in a number of people, a few from academia, some from industry, and we also got the perspective of the ILSI group, in order to help continue the communication on this topic so that as many people as possible get to hear about it.

The MAPP document also mentions that your subcommittee would “recommend standards for the submission and review of nonclinical PGx data sets from sponsors.” Have any of those been finalized yet?

No. When you start talking about cross-platform issues — when you change the filters on your statistical parameters and come up with a different list of genes — it begins to become a very, very difficult problem.

What does seem to be getting worked out are some of the technical issues related to the quality of the data. I think people have pretty much agreed on what that takes.

But then how do you report the data, how do you warehouse the data?

Some groups are looking at the public databases like [those at] the [European Bioinformatics Institute], and then the [US National Institute of Environmental Health Sciences] has a public repository they’re trying to set up. It becomes very difficult to get the data from all these different platforms into a format that’s actually useful for a database.

We're actually listening — we're not taking a lead on any of that. But we're listening to what they have to say on these issues, because they're developing the databases themselves. They have the expertise in it, and there are a lot of issues that need to be worked out before it's really useful from a large-scale database development [perspective]. It's just not there yet.
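[To illustrate Leighton's point about filter settings, here is a minimal, hypothetical Python sketch; the gene names, fold changes, and p-values are invented for illustration, not drawn from any FDA submission. The same expression data passed through two defensible statistical filters yields two different "significant" gene lists.]

# Each gene maps to an invented (fold_change, p_value) pair.
genes = {
    "CYP1A1": (3.2, 0.001),
    "GSTP1": (1.8, 0.03),
    "TP53": (1.4, 0.04),
    "ACTB": (1.1, 0.60),
}

def significant(data, min_fold, max_p):
    """Return the genes that pass a fold-change and p-value filter."""
    return sorted(g for g, (fc, p) in data.items()
                  if fc >= min_fold and p <= max_p)

# A strict filter and a looser one disagree on what "the gene list" is.
print(significant(genes, min_fold=2.0, max_p=0.01))  # ['CYP1A1']
print(significant(genes, min_fold=1.5, max_p=0.05))  # ['CYP1A1', 'GSTP1']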

Any idea when standards will be recommended? It seems like it would be hard to tell, since you haven’t been there before.

I'm speaking from my own perspective as a member of this committee, and from what I know inside the agency.

Two ways to do this are: start with the technology and say, “OK, this is where the technology is useful, and we'll develop our review standards from there.” Or you can turn around and say, “OK, this is what we need to review an application,” and then develop standards based on that. So it depends which way you're going to come at it.

What I'm thinking about is, “How much data is sufficient to support a finding?” Do you need to submit all the gene changes that were looked at — all the genes on a chip file? Do you take a step back and report just filtered, normalized data? Take another step back and report just the gene changes? Or, finally, can you just report the gene changes for the pathways that are critical?

So what ultimately ends up as a standard at the FDA will maybe drive what standards need to be developed. The question you ask [will lead you to] develop a different set of criteria regarding the quality of that data, in terms of whether or not you need to report, say, a standard statistical package. Well, if you only need to report gene changes in the pathway, a standard statistical package may not be needed.
