A new computational toxicology initiative underway at the US Environmental Protection Agency signals the official entry of the EPA’s Office of Research & Development (ORD) — as well as its regulatory arm — into the -omics era.
The EPA may be a few years behind its federal peers, such as the DOE, NIH, and FDA, but it doesn’t plan to waste much time catching up, according to Robert Kavlock, director of the reproductive toxicology division at the EPA’s National Health and Environmental Effects Research Laboratory and chair of the computational toxicology implementation team. The Food Quality Protection Act, which Congress passed in 1996, imposed stricter guidelines for pesticide screening that will ultimately lead to an increase in animal testing. To reduce the cost of those additional tests, the EPA is turning to computational biology and genomics “to see whether we can make predictive models for what happens in animal studies,” Kavlock said.
According to the draft framework document, the computational toxicology research program has three goals: “1) improved linkages across the source-to-outcome continuum, 2) approaches for prioritizing chemicals for subsequent screening and testing, and 3) better methods and predictive models for quantitative risk assessment.”
Kavlock described the first goal as “building the toolbox,” the second as “a very fundamental application,” and the third as “the holy grail of the EPA: Can we figure out which chemicals are going to affect which people — or which animals in the environment — at which doses?”
While some at the EPA envision a day when in silico models will be able to replace all animal testing, Kavlock is more cautious. “I don’t see that happening overnight,” he said. “I don’t think there are any doubts that it’s the way to go, but there’s going to be a lot of false starts before we get this right.”
With an expected total budget of between $10 million and $20 million for FY 2004, and less than $10 million of that available for purchasing new equipment, the program will have an uphill battle in bringing what Kavlock described as an “extremely minimal” computational biology infrastructure up to speed. By comparison, the National Human Genome Research Institute’s budget for bioinformatics and computational biology in FY 2002 (the last year for which data is available) was more than $35 million, and the National Cancer Institute has requested $88 million for bioinformatics in 2004. But Kavlock said the EPA is planning to rely heavily on inter-agency partnerships to help extend its capabilities. The agency is in discussions with the Department of Energy’s Joint Genome Institute as well as the National Institute of Environmental Health Sciences about possible partnership opportunities. It also plans to support academic partners through its external grants program and seek out partners in industry. “We don’t think we’re going to be able to do this alone; we don’t have the resources,” Kavlock said. “We may know what the problems are, but we may not know what the solutions are, and we certainly don’t have enough money to do it. So it’s going to require that we work with other organizations.”
Small Steps toward Systems Biology
Kavlock said his reproductive toxicology group began looking into genomics technologies as a means to study birth defects and reproductive function about five years ago, meaning it was “a little bit ahead of the curve,” compared to other groups at the EPA. The push toward an agency-wide genomics effort didn’t begin until the spring of 2002, when Paul Gilman left his post as director of policy planning at Celera Genomics to become EPA’s assistant administrator for research and development. “It was really under [Gilman’s] leadership that this computational toxicology issue became a lot more prominent at [the] EPA,” Kavlock said.
The effort is still in its preliminary stages. In July, Kavlock and several EPA colleagues put together a draft “framework” document that outlined the goals of the initiative (available at http://www.epa.gov/nheerl/comptoxframework/comptoxframeworkfinaldraft7_17_03.pdf), and the group formally introduced the framework to the rest of the EPA’s ORD at a workshop in September. The next phase of the project was slated to kick off last Friday with the first meeting of the implementation team, Kavlock said.
The program has already set aside a total of $2.4 million to fund three to five awards under a program to develop systems biology models of the endocrine system. An RFA for the program (available at http://es.epa.gov/ncer/rfa/current/2003_comptox.html) was issued in August, and proposals are due by Jan. 21. Kavlock said he expects additional grants of similar size to be awarded in bioinformatics for 2004, “because inside the EPA one of our big bottlenecks is going to be bioinformatics. We can produce data, but we’re easily going to get swallowed by it.”
The data is already on its way. So far, the EPA has acquired three Affymetrix systems for microarray studies in human, rat, and mouse, and is also spotting its own arrays. In addition, an EPA lab in Athens, Ga., plans to purchase a 600 MHz wide-bore NMR for metabonomics studies. Through its partnership with JGI, the EPA also expects to get its hands on the complete genome sequence of the fathead minnow — “the white lab rat of the fish world,” according to Kavlock.
Not Your Pharma’s Informatics
While many of the informatics tools for environmental research will be similar to those used for human health research, there are a few significant challenges that the EPA will have to address. For example, Kavlock said, the agency’s mandate to study wildlife will require sequence data for fish and frog species that aren’t priorities for the NHGRI. In addition, the universe of chemical structures that pharmaceutical companies study is only a fraction of those under the EPA’s purview. “We’re not focused on some tight chemical structure that we’re trying to tweak around a little bit and see if we can make it more efficacious or less toxic, but we’re dealing with 80,000 chemicals that are in commerce, and they’re all different structures,” Kavlock said. What’s more, he noted, while the pharmaceutical industry focuses on very active chemicals, “we tend to see chemicals that are not going to be that active in terms of potency, but there may be millions of people exposed to them in the drinking water, so there could be low-level exposure to a lot of people” — a situation that requires a much better understanding of the dose-response relationship, he said.
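Why the shape of the dose-response curve matters so much at low exposures can be seen with a simple sigmoidal (Hill) model — a standard textbook form, shown here purely as an illustration rather than as any method the EPA has adopted; the parameter values are arbitrary:

```python
def hill_response(dose, emax=1.0, ec50=10.0, n=1.0):
    """Fractional response under a simple Hill dose-response model.

    emax -- maximal response; ec50 -- dose giving half-maximal response;
    n    -- Hill coefficient (steepness of the curve).
    All parameter values here are illustrative, not measured.
    """
    if dose <= 0:
        return 0.0
    return emax * dose**n / (ec50**n + dose**n)

# At the EC50, the response is half-maximal regardless of steepness.
print(hill_response(10.0))                 # 0.5

# At a dose 100x below the EC50, a shallow curve (n=1) still predicts
# a small but real response, while a steep curve (n=4) predicts an
# effectively negligible one -- a ~million-fold difference that would
# dominate any population-level risk estimate.
print(hill_response(0.1, n=1.0))           # ~0.0099
print(hill_response(0.1, n=4.0))           # ~1e-8
```

The point of the sketch: two models can agree perfectly on high-dose animal data (both pass through the same EC50) yet diverge enormously in the low-dose region where millions of people are actually exposed — which is why Kavlock calls quantitative low-dose prediction the “holy grail.”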
ORD’s work will ultimately feed into the EPA’s regulatory arm, which makes up the bulk of the agency. Currently, the EPA follows an “interim policy” on genomics that it issued in 2002 (available at http://www.epa.gov/osp/spc/genomics.pdf), which states that, “While genomic data may be considered in decision-making at this time, these data alone are insufficient as a basis for decisions. For assessment purposes, the EPA will consider genomics information on a case-by-case basis.”
As it comes to terms with genomics data over the next few months and years, the EPA will be following in the footsteps of the mother of all regulatory agencies, the FDA. The EPA has an internal genomics task force that is currently preparing a white paper on genomics and risk assessment that is “not totally different from what FDA has been doing with its recent [pharmacogenomics] guidance document,” Kavlock said. “I think FDA is quite a bit farther ahead because the pharmaceutical industry has been so invested in this. Hopefully,” he added, “we’ll learn by everything they’re doing right and wrong.”