
Mocking PGx Data Submissions With Expression Analysis' Steve McPhail

At A Glance

Name: Steve McPhail

Title: President and CEO, Expression Analysis

Background: Expression Analysis, Inc. — 2002-Present; ArgoMed, Inc. — 2001-2002; Xanthon — 2000-2001; TriPath Imaging, Inc. — 1997-1999; Dynex Technologies — 1992-1997; Abbott Laboratories Diagnostics Division — 1981-1991

Education: B.S. degree in biology from San Diego State University

On the second day of the San Diego Clinical Genomics conference last week, Steve McPhail, president and chief executive officer of Expression Analysis, spoke about the mock pharmacogenomic data submission his company and Schering-Plough Research Institute carried out with the US Food and Drug Administration, a process meant to prepare the agency for the any-day-now release of its final guidelines.

When the conference was over, Pharmacogenomics Reporter had a conversation with McPhail about the mock submission and what the companies and the FDA had learned from it.

How did the mock pharmacogenomics data submissions come about?

It started in 2003. We initiated conversations with the Food and Drug Administration in February of 2003, at the Society of Toxicology meeting. We were approached by Frank Sistare, who was with the FDA at that time — he has moved on to greener pastures since then. But he approached us because he recognized that we accepted samples from a wide variety of clients and we reported data out to a wide variety of clients. And they were very interested in understanding, specifically, data issues around microarrays.

So they approached you?

Yeah. And they introduced us to the Schering-Plough Research Institute. At the time, he had been working with the [FDA] pharmacogenomics working group, and I think they felt that keeping our collaboration to a few individuals might lead to more understanding, [and] more work getting done in a relatively short period of time.

Had you worked with Schering-Plough before?

Well, Schering-Plough had been one of our clients before, but Schering-Plough Research Institute had done some work on a drug that they had decided not to take to market. And so Schering-Plough Research Institute decided to collect the data that was generated in the toxicological evaluation of this drug — the Affymetrix data, but also the clinical chemistry data, the pathology data, and the phenotypic data. And then we molded that into the mock pharmacogenomics data submission.

So the FDA wanted to go through a trial run?

They wanted to have someone make an electronic submission of microarray data to them, so that they could begin to put in place the necessary protocols to begin to deal with that type of data.

The FDA is very used to dealing with laboratory data. Eighty percent of an NDA is laboratory data. They’re very used to dealing with that, but they’re not very used to dealing with large datasets like that. So this was an opportunity for us to help them understand how this data might be utilized in a regulatory submission.

You mentioned in your talk that there was some trouble with the XML format of the data.

It’s clear that the agency is moving forward with XML data. It’s a good format for them to keep complex data in, but at the time that we made the initial submission, I think only the database folks could read the data file — the reviewers couldn’t. They didn’t have access to it. But I think it’s come a long way since the submission.
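
To make the format question concrete, here is a minimal, purely hypothetical sketch of how a handful of microarray measurements might be serialized to XML. The element names, probe-set IDs, and values below are invented for illustration and are not the schema used in the actual Expression Analysis/FDA submission.

```python
# Hypothetical illustration only: a toy XML serialization of a few
# microarray signal values. The tags, IDs, and numbers are invented;
# they do not reflect the schema used in the actual submission.
import xml.etree.ElementTree as ET

# Made-up signal values keyed by probe-set ID.
measurements = {"PROBE_0001": 812.4, "PROBE_0002": 95.7, "PROBE_0003": 4310.0}

root = ET.Element("MicroarraySubmission", platform="Affymetrix", study="MOCK-PGX")
sample = ET.SubElement(root, "Sample", id="S001", timepoint="baseline")
for probe_id, signal in measurements.items():
    probe = ET.SubElement(sample, "ProbeSet", id=probe_id)
    ET.SubElement(probe, "Signal").text = f"{signal:.1f}"

ET.indent(root)  # pretty-print; available in Python 3.9+
print(ET.tostring(root, encoding="unicode"))
```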

So that was the common format you mentioned in your talk. “Context” and “content” were the other two things you said the FDA wanted to figure out.

Right. They were interested in, ‘What should the content of a microarray data submission look like?’ We basically put in [data on our] laboratory infrastructure, we included data on study-specific processing controls, and we also included data on statistical analysis determination. And when Schering-Plough came into the picture, they provided us with the toxicogenomic data interpretation.

What did you learn from the experience?

It was a great experience for us. We really began to get an understanding of what the FDA requirements would be in microarray data submissions. We build our business around supplying regulatory clients with microarray data, and that really helped us tremendously. We were able to work with the agency through several critical issues surrounding how the data should be presented. It was a tremendous experience.

I think that probably the biggest thing we did is that we helped to contribute to the manner in which this data can be used by the agency, and we now have a better understanding of what that is.

What did the FDA learn from the process?

I think they learned a great deal from our expertise. Hopefully we taught them how to interpret this data in a regulatory submission. I think that there are a lot of people that have a lot of experience, or at least some experience, with microarray data, but not in the context of a regulatory submission.

So I think that we helped them understand how it might be used.

So did they go through the process and give you a mock recommendation on what to do next in the process?

No, they didn’t do that. But after our pilot submission in July — it was sort of our straw man — we threw it out and gave them a chance to look at it and come back to us with comments like, “Hey, you really need to include this,” and, “This doesn’t need to be here.”

So we got that feedback from them and worked on another submission for another 60 or 90 days, and then turned the submission in to them, and got great information back from them in terms of about 16 pages of questions.

We had a primary reviewer and a secondary reviewer, but there were about 30 to 40 individuals in different departments within the agency that had the opportunity to look at the information and ask questions: “What’s that for? Why did you do this?”

How large were the submissions? Was it a whole expression array?

From the files, they had access to all of the genes. In the toxicogenomic interpretation, which is what Schering-Plough Research Institute really did to add context to the data, we narrowed it significantly — to several dozen genes.

What were the Phase III cost issues that you mentioned?

If you’re talking about Phase III clinical trials, that was one of the challenges. They recognize that in order to implement this technology successfully and effectively in Phase III clinical trials and in clinical diagnostics, the cost has to come way down from where it is.

To process an Affymetrix chip today is not an inexpensive task. The chip itself is quite expensive, the reagents and consumables are expensive, and of course you’ve got the labor associated with processing, and you’ve got overhead. That makes this a pretty expensive proposition. From that point, you’re talking about Phase III clinical trials with 10,000 patients. When you’re running a gene expression microarray, it’s not going to have just a single timepoint — you’re going to take at least two timepoints, and most likely more than that. So if you start talking about three tests over 10,000 people, there are a lot of expenses that add up in a real big hurry.

Did you calculate a range of costs for that?

Yeah, it’s expensive. I can tell you where I think the cost needs to be in order to be implemented in Phase III clinical trials. It needs to be under $500, fully loaded — everything.

Per sample?

Per timepoint.
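
As a back-of-the-envelope check on why that target matters, the sketch below multiplies out the 10,000-patient, three-timepoint scenario McPhail describes. The $500 fully loaded per-timepoint figure is his stated target; the higher "current cost" figure is an assumed placeholder for comparison, not a number from the interview.

```python
# Rough trial-level arithmetic for the scenario discussed above.
# The $500 per-timepoint target and the 10,000-patient, three-timepoint
# scenario come from the interview; the $1,000 "current cost" is an
# assumed placeholder, not a figure McPhail gave.
PATIENTS = 10_000
TIMEPOINTS_PER_PATIENT = 3

def trial_expression_cost(cost_per_timepoint: float) -> float:
    """Total gene expression profiling cost for the whole trial."""
    return PATIENTS * TIMEPOINTS_PER_PATIENT * cost_per_timepoint

print(f"At $500/timepoint:   ${trial_expression_cost(500):,.0f}")    # $15,000,000
print(f"At $1,000/timepoint: ${trial_expression_cost(1_000):,.0f}")  # $30,000,000
```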

So, should companies narrow the amount of data they collect?

That would make the FDA nervous. They really want to collect all of the data and begin to mine all of it.

But that’s one option — to narrow the amount of data. Another option is to do what Affymetrix is doing in building [densely packed arrays].

What were the lab performance standards?

What we did in that collaboration was really a predecessor to offering a microarray proficiency-testing program. Once the technology moves into a CLIA-regulated environment, [microarrays will be subject to new standards]. We thought that we could begin to lay the groundwork for microarray-based proficiency testing.

What we’re doing is assessing the range of performance characteristics in an Affymetrix microarray, where the same sample was tested in multiple labs at multiple timepoints. What we’ve seen thus far indicates better performance than we hoped for when we initiated this. We had no idea what to expect from the performance characteristics and variability associated with microarrays.

How do you quantify that?

We measured the data in many different ways. Three measurements we try to take are: we look at the quality control [metrics] generated in the Affymetrix system, we look at comparability, and we look at reproducibility.
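
As an illustration of what a reproducibility measurement might look like in practice, the sketch below computes the correlation of log2 signal intensities for the same sample run in two labs. The numbers are invented, and this is a generic metric, not necessarily the one Expression Analysis used in its proficiency-testing work.

```python
# Generic illustration of a reproducibility metric for replicate arrays:
# Pearson correlation of log2 signal intensities for the same sample
# processed in two labs. All values are invented for illustration.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy replicate signal values for the same sample in two labs.
lab_a = [812.4, 95.7, 4310.0, 150.2, 980.5]
lab_b = [798.1, 101.3, 4402.7, 144.8, 1012.9]

log_a = [math.log2(v) for v in lab_a]
log_b = [math.log2(v) for v in lab_b]

print(f"Reproducibility (log2-signal Pearson r): {pearson(log_a, log_b):.3f}")
```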

Who else submitted?

I showed some of Millennium’s data today; Wyeth has also made voluntary genomic data submissions, and so has GlaxoSmithKline. They have submitted real data. Those are the few that I know of.
