National Center for Toxicological Research
Name: William Slikker
Position: Incoming director, National Center for Toxicological Research
Education: University of California, Santa Barbara, BA, 1972, biology; MA, 1972, biological sciences
University of California, Davis, PhD, 1978, pharmacology and toxicology
Experience: Director, Division of Neurotoxicology, NCTR/FDA, 1993-present.
Acting Director, Division of Neurotoxicology, NCTR/FDA, 1992-1993.
Acting Director, Division of Reproductive and Developmental Toxicology, NCTR/FDA, 1990-1991.
Chief, Pharmacodynamics Branch, Division of Reproductive and Developmental Toxicology, NCTR/FDA, 1980-1990.
William Slikker began his new position as director of the FDA's National Center for Toxicological Research on Jan. 3. Slikker, who replaces Daniel Casiano, will be responsible for several projects that aim to help industry, academia, and government better understand some of the practical challenges of pharmacogenomics.
Pharmacogenomics Reporter caught up with Slikker recently to discuss how the healthcare industry, academia, and the FDA and other government agencies can partner to help ensure that pharmacogenomics technologies continue to evolve and become more broadly applicable.
Are you going to be doing anything different from Dr. Casiano?
Dan worked diligently to prepare the center for the goals of … moving toward … genomics, proteomics, and metabolomics. And we're going to continue that work, moving forward with the -omics to support regulatory decisions within the FDA.
Can you tell me specifically what you plan to do, especially as it relates to pharmacogenomics?
Certainly, pharmacogenomics is a big portion of where the FDA is moving in terms of the Critical Path [initiative]; the idea is to increase the amount of information that can be generated so we can move this process forward from initial discovery of a chemical agent to its applications to improve human health. One of those ways is to improve the pharmacogenomics aspect, and to do that with the modern -omics tools in particular.
Are you tooling up your lab?
What we're doing is building an … approach where we're applying systems biology to solving problems in this area of increased technology. We're doing this by bringing individuals together to work in teams to apply not only the gene-expression assays, but also proteomics and metabolomics. [We're looking to integrate] those with bioinformatics so you have products that can push forward approval of drugs and other chemical agents.
Is there a class of drugs that can benefit specifically from this push to implement these technologies?
I think generally what we're talking about is improving the process and the kinds of approaches that are used, and we are trying to develop this as more of a team [to create a] seamless operation so you can integrate information about gene expression, the proteomics [tools and data] that are available, as well as certain biomarkers … using metabolomics, so these can be integrated in a total package.
What's the next goal post here?
We're trying to move forward with the idea of improving and understanding the quality of array data. The MAQC [microarray quality control study] is a way in which this kind of platform validation and standardization can be achieved. The project is led by Leming Shi at NCTR in conjunction with other centers, especially CDER, as well as platform providers and academic groups. It is going forward to provide validation and quality assurance of genomic data.
Along with that, we have ArrayTrack [a database that was also developed at NCTR], which is a tool that can be used to acquire and accumulate genomic information, but can also analyze and interpret those data. That tool is being used by many different portions of the federal government and industry.
These are two milestones that are becoming available. Now we want to expand the use of ArrayTrack not only for genomic data but also for metabolomic and proteomic data.
There are still voices out there that say that microarrays are not really suited to perform as great diagnostic tools because they are not comparable between platforms. Are you finding that to be the case?
That's exactly what the [MAQC] study is trying to understand. And what's impressive about it is that we have a great number of participants from most of the platform providers, and many other individuals who use these tools within government and academia. We aim to ask and answer: 'What is the comparability? How can these tools be used in a way in which they will produce very constructive information?' And I think right now the results look very favorable.
What happens after that? How do you disseminate the information, and what happens after that?
[We need to try to] generate the manuscripts so the results can reach the public. From there, we're hoping that the data will be very convincing on how to use these platforms successfully and how to analyze the data so you can compare across the different platforms that are available.
Additionally, having the standard RNA sets to allow comparability between labs is important.
Is there another aspect of pharmacogenomics, genomics, and proteomics that I'm leaving out?
I think the thing is that these are important tools, and the idea is to use them to move personalized medicine forward. We know that we need this information from pharmacogenomics and from the various modern tools so we can provide the … successful application of personalized medicine. That's the direction the whole nation would like to move in: making sure we're getting the best medicine to the individual.