Fortunately, so is NCI’s budget, which tops $4.8 billion this year. With that money — and plenty from other funding agencies, organizations, and even individual donors — scientists are continually embarking on new research paths to understand and defuse this disease. In the following pages, we profile some of the leading scientists as well as the latest cancer initiatives that have started up in the past year.
Talk about eye strain. Imagine performing functional gene expression experiments and then peering through a microscope to acquire and analyze tens of thousands of cellular images in a short amount of time. It’s just not feasible.
To do their work, the scientists automated the acquisition of microscopic cellular images and then applied sophisticated image analysis methods, taking a cue from facial recognition technology. “What facial recognition comes down to is taking pixels and figuring out the patterns in the pixels which will match a face to a person,” explains Sumit Chanda, a researcher at Novartis’ Genomics Institute. The researchers adapted this idea, designing an algorithm that recognizes cells and identifies those with uncontrolled growth.
In a proof-of-principle experiment, they scanned approximately a third of the human genome to identify genes that cause significant increases in the growth of human cells, which may predispose them to becoming cancerous.
“It is really quantitatively impossible for a scientist to sit down and do this, so we thought this is something that could become a computerized process,” Chanda says. He says the technique, which can be used for various types of studies, is currently being used by his team to investigate genes that affect cell migration and differentiation.
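The article doesn’t describe the algorithm’s internals, but the basic idea — segmenting cells out of pixel data and flagging images whose cell counts suggest runaway growth — can be sketched in miniature. Everything below is a hypothetical toy, not the Genomics Institute’s actual method: images are tiny intensity grids, and the thresholds are made up.

```python
# Toy sketch of automated cell-image screening (hypothetical; not the
# Genomics Institute's actual algorithm). An "image" is a grid of pixel
# intensities; bright connected regions are treated as cells, and an
# image is flagged when the cell count exceeds a cutoff.

def segment_cells(image, threshold=128):
    """Return connected bright regions found by 4-connectivity flood fill."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                stack, region = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                regions.append(region)
    return regions

def flag_overgrowth(image, max_cells=3):
    """Flag an image whose segmented cell count exceeds a cutoff."""
    return len(segment_cells(image)) > max_cells

# Two 6x6 toy "images": one sparsely, one densely populated with cells.
sparse = [[0] * 6 for _ in range(6)]
sparse[1][1] = sparse[4][4] = 255
dense = [[0] * 6 for _ in range(6)]
for r, c in [(0, 0), (0, 3), (2, 1), (2, 5), (4, 0), (4, 3), (5, 5)]:
    dense[r][c] = 255
print(flag_overgrowth(sparse), flag_overgrowth(dense))  # False True
```

A real pipeline would of course extract far richer per-cell features (shape, texture, marker intensity) before classifying, but the shape of the computation — segment, measure, threshold — is the same.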
The concept of a cancer genome project has been kicking around in the field for at least a couple of years now, but in mid-December the National Cancer Institute and National Human Genome Research Institute brought the chatter to a whole new level. Each institute has agreed to pay $50 million to cover the costs of a three-year pilot project that will officially begin what is being billed as The Cancer Genome Atlas.
It could conceivably take the entire three years dedicated to the pilot project just to get all of those elements working together properly. The NIH agencies are trying their best to fast-track the program; Barker says the goal is to review, select, and fund the tissue repository, sequencing centers, and genome characterization centers this year. Given the wide scope of the project and the challenge of starting so much of it from scratch, Barker says, “one of the things that we’ve been struggling with is how do we measure the success of this pilot.” Getting the infrastructure established will be one milestone; a second “is to be able to sequence cancer genes reproducibly,” she adds.
For the purposes of the pilot, scientists will try to keep the cancer biology itself as simple as possible. No specific type of cancer has been chosen yet, Barker says, but she and her colleagues will be on the lookout for tumors that are highly homogeneous and that come from cancers with very few subtypes. Barker expects the pilot centers to be looking in-depth at two particular cancers, though a few others may be studied peripherally.
Once the tumors have been selected, the genome characterization centers will use their technology arsenal to glean as much information as they can from them. Barker expects these tools to include array-based systems — for expression analysis and copy number changes, for instance — in addition to new technologies, such as epigenomics. “The fondest wish for this is [to] find genes of real meaning to cancer that could be used in the clinic,” Barker says.
All information coming out of the Atlas project will be served up using the caBIG platform, developed during the last few years by the eponymous NCI initiative.
The cancer genome project “captures the best of what we know about cancer biology and integrates it with the best of what we’re able to leverage with current genomic technologies,” Barker says. “It’s the right project at the right time.”
RNAi has certainly been the technology darling of the systems biology field for the last few years, but the tool still has one major hurdle to clear to prove that it’s different from its antisense predecessor: success in the clinic.
Alnylam, Sirna Therapeutics, and others are racing to bring RNAi-based therapeutics into clinical trials. Last year, SR Pharma acquired the RNAi company Atugen for upwards of £6 million, forming a larger pharma with an established pipeline of RNAi-based targets.
The deal spurred SR Pharma onward, allowing the publicly traded company to raise £10 million. With that funding, says CSO Klaus Giese, the combined company is on track to deliver a product to the clinic in the first half of next year. Even before then, a product Atugen developed jointly with Quark Biotech is slated to reach the clinic sometime this year, he adds.
Delivery has long been the Achilles’ heel of these types of therapeutics. Giese says that Atugen’s siRNA is chemically modified and stabilized in a way that trumps competitors’ siRNA. “By using only naturally occurring building blocks” in the siRNA, Giese says, Atugen “has a huge advantage in terms of toxicology. … Even if the molecule breaks down, which it will, the cell can metabolize it safely.” The molecule is also protected against nuclease degradation, and on the more practical side Atugen’s proprietary siRNAs allow for lower doses and less frequent injections, he adds.
With Atugen’s technology at the ready, SR Pharma’s first products will target pancreatic and liver cancer, Giese says. Early experiments have shown repeatedly “that we can shrink tumors and, more importantly, we can prevent the establishment of metastases,” he says.
Years ago, Atugen began as a pure target validation company, according to Giese. “When we realized that we had a very powerful technology we immediately started our own gene discovery and target validation [effort],” he says. Because of that early research, the company now has three RNAi-based targets in its pipeline in addition to the one that’s scheduled to enter the clinic next year.
If Jarrod Marto had any thoughts about a Christmas bonus, this probably wasn’t what he had in mind. Marto, who was recruited to the Dana-Farber Cancer Institute in 2004 to help the organization beef up its proteomics capacity, was named in December as director of the brand new Blais Proteomics Center, a research facility funded by a $16.5 million gift to Dana-Farber from John and Shelley Blais.
In reality, though, plans are still in the earliest stages. Marto says the institute is just now considering renovations for suitable space for the center, which will be located in an existing DFCI building. “We’re at time zero here,” he says.
Marto came to the institute because he wanted his proteomics work to be more strongly linked to helping patients. “I was very motivated to bring state-of-the-art proteomics much closer to clinical research,” he says. “It’s my philosophical mindset that that environment is really the best way to both drive the science of proteomics itself and also to keep your eyes on the prize.”
Currently, Marto has five people in his lab, all of whom have joined since he hung his shingle at Dana-Farber. He’ll be ramping up the recruiting, though, as the new proteomics center gets underway. The center will have smaller projects that involve working with individual labs at the institute, Marto says, as well as larger-scale efforts in collaboration with other interdisciplinary science centers at Dana-Farber. “There’s a center for cancer systems biology at the institute that is looking at protein-protein interaction networks,” he says, citing one example. “We hope to bring an analogous scale of proteomics experiments in line on that effort.”
Though it’s still early days at the Blais Proteomics Center, Marto has one definite message for all of his proteomics colleagues interested in tackling the vagaries of cancer: “Keep us in mind,” he says, “because we’ll be recruiting.”
Jim Heath envisions a day when cancer patients can monitor their disease in real time, perhaps by using a fingerprick test similar to the way diabetic patients monitor glucose. To accomplish that, Heath says, a systems biology approach is key. “When you realize from a biological and clinical point of view what you’re trying to do, it really affects the technology that you’re trying to develop.”
The center’s co-directors include Hood and Phelps, both of whom share Heath’s view that cancer demands systems approaches. “We had been working towards using the goals of systems biology to tackle cancer, through both a fundamental knowledge and through enabling in vivo and in vitro diagnostics for a few years,” Heath says.
The current focus of the center is on “making high-affinity protein capture agents in high-throughput,” Heath says. With luck, that will lessen the time and cost it takes to develop a high-affinity antibody.
Heath’s approach is to identify several different molecules that will bind to a protein of interest with micromolar affinity, then to modify them in a way that forms a bond between the molecules. As a result, the affinity of the complete molecule is roughly the product of the affinities of the two moieties. The result can be a relatively small molecule that binds strongly to a protein identified through a systems biology approach. “It would be pretty inexpensive and probably a lot faster,” Heath says.
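A back-of-envelope calculation shows why that product rule is so powerful: two individually modest micromolar binders, once linked, land in the picomolar range. The numbers below are illustrative only, and the simple product of dissociation constants is an idealization, not Heath’s published model.

```python
# Back-of-envelope illustration of the linked-binder idea: if two
# moieties each bind with micromolar affinity and the combined
# dissociation constant is (ideally) their product, the linked molecule
# binds in the picomolar range. Values are hypothetical.

kd_a = 1e-6  # moiety A dissociation constant: 1 micromolar (mol/L)
kd_b = 1e-6  # moiety B dissociation constant: 1 micromolar (mol/L)

kd_linked = kd_a * kd_b  # idealized product rule

print(f"Linked Kd: {kd_linked:.0e} M ({kd_linked / 1e-12:.0f} pM)")
# Linked Kd: 1e-12 M (1 pM)
```

That six-orders-of-magnitude gain is the appeal: micromolar binders are comparatively cheap to find, while picomolar affinity is what a useful capture agent needs.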
The new center establishes a collaboration between investigators at Caltech, the Institute for Systems Biology, and UCLA’s Institute for Molecular Medicine and Johnson Comprehensive Cancer Center. While the academic wings of the center work on basic and clinical research questions, Siemens Biomarkers Solution and the Homestead Clinical Corporation will handle commercialization of any resulting technology.
In recent years, researchers have been hard at work developing microarray-based diagnostics for cancer. So far, Arcturus Biosciences has been the brains behind the MammaPrint test and the CUP/TUO technology, both of which are licensed to other companies. MammaPrint assesses breast cancer recurrence risk, predicting the aggressiveness of breast tumors based on the activity of key genes. The cancer of unknown primary (CUP) and tumor of unknown origin (TUO) technologies identify the type of tumors whose origin is unclear, a situation that arises in about 100,000 US patients each year.
But get ready — plenty of companies are in the midst of developing microarrays for cancer. Below is a sample of a few projects in the works.
• Since 2003, ExonHit Therapeutics and BioMerieux have been in the process of developing a breast cancer diagnostic that will enable the detection of cancer at a very early stage. This past October, the two companies announced they were extending their collaboration to other types of cancers, including colon, prostate, and lung cancers.
• Randox is developing a microarray for colorectal cancer and breast cancer.
• DiaGenic is working on a diagnostic for breast cancer.
• Ipsogen is developing a product it’s calling the Breast Cancer Profile Chip.
• Toray Industries and DNA Chip Research are collaborating on a number of cancer-related microarray projects.
• CombiMatrix Molecular Diagnostics is developing a diagnostic for malignant melanoma.
• Orion Genomics is trying to identify epigenetic biomarkers for developing early cancer screening and personalized therapy options.
To err is human, and sometimes this is a good thing — it can lead to insight. When researchers in the United Kingdom discovered flaws in their gyroscopes, it spurred them on to develop vibrating discs that can detect cancer.
“The mass imperfections were of the order of picograms or less,” says Calum McNeil, a professor of biological sensor systems at the university. “[We] recognized that if such a miniature structure could be affected by removal of such tiny amounts of mass, then we had the basis of a very sensitive biological sensor, if we could make devices which were deliberately affected by the addition of mass.”
Since then, the researchers have set out to create vibrating discs that can help diagnose and monitor common types of cancer. They have manufactured discs less than one-tenth of a millimeter in diameter and coated them with special patterns of DNA or proteins that cause cancer-specific markers to bind to the surface. The discs are made to vibrate electronically in two distinct ways, but at the same frequency. When a cancer-specific marker binds to the patterned surface of a disc, the uneven weight causes the frequency of these vibrations to change. By measuring this change, researchers can detect tiny amounts of cancer-specific marker.
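The scale of the effect can be estimated with the standard small-mass approximation for a resonator: adding a mass dm to a disc of effective mass m shifts its resonant frequency f by roughly df ≈ -f·dm/(2m). The numbers below are hypothetical, chosen only to show why picogram binding events are measurable; they are not the group’s actual device specifications.

```python
# Why picogram mass changes are detectable on a resonating disc.
# Small-mass approximation for a simple resonator:
#     df ~= -f0 * dm / (2 * m)
# All values are hypothetical, chosen to illustrate the scale.

f0 = 5.0e6    # resonant frequency: 5 MHz (hypothetical)
m = 1.0e-11   # effective disc mass: ~10 nanograms, in kg (hypothetical)
dm = 1.0e-15  # bound marker mass: 1 picogram, in kg

df = -f0 * dm / (2 * m)  # frequency shift from the added mass
print(f"Frequency shift: {df:.0f} Hz")  # Frequency shift: -250 Hz
```

A shift of hundreds of hertz against a megahertz carrier is well within reach of standard frequency-counting electronics, which is what makes the approach attractive as a label-free readout.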
So far, the scientists have studied thyroid cancer recurrence by monitoring the presence of mRNA for the protein thyroglobulin. A number of other cancer projects are also underway, including breast, cervical, and colorectal cancers. “It will take four years to get to the clinical evaluation phase,” McNeil says.
While the researchers are focusing first on different types of cancer, they say the device could be used for a range of other diseases, including those caused by bacteria. This opens up the possibility of hospitals being able to screen new patients and visitors for MRSA, tuberculosis, and other diseases, so that doctors can try to minimize the dispersal of infectious agents in hospitals. The vibrating discs could even be used to detect particles from biological or chemical weapons, providing an early warning system against terrorist attacks.
Robert Gentleman has been at the Fred Hutchinson Cancer Research Center for a little more than a year, but the computational biologist really found his niche when philanthropists Bob and Pat Herbold donated $1.5 million last October to accelerate the nascent program that Gentleman was heading up.
One immediate use of the money will be to sweeten the pot for potential recruits. Gentleman has spent his first year at the Hutch setting up his lab and starting the long recruiting process associated with building a new research program. “This will give us the ability to hire more people more quickly — and provide them with better startup packages,” he says. He hopes to grow the program from its current staff — namely, him — to about 10 investigators. The single-person program doesn’t mean that he’s the only one doing computational biology at the Hutch, he says; in fact, there are plenty of people using the tools in their work. “In one sense, the purpose of setting up a program like this and giving it a name,” Gentleman says, is to give researchers an organizing force — which also “gives us the ability to go after bigger research projects.”
Gentleman came to the Hutch from Dana-Farber. His background is mathematical: “I was a real statistician with nothing to do with biology for about 15 years,” he says. While he was at Harvard, he stumbled onto “some pretty interesting problems” and found himself adopted by the biology community. He may be best known as a co-creator of the programming language R.
At the Hutch, Gentleman’s computational biology program focuses on software engineering for computational biology, as well as building “graph and network models for gene and protein interactions,” he says. Because his lab will interact with investigators in many other groups, his team will not select a particular type of cancer to study. Gentleman’s program is also the center for the Bioconductor project, which promotes the development of open-source software for bioinformatics.
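The graph models Gentleman mentions can be pictured with a minimal sketch: proteins as nodes, interactions as edges, and questions like “what is reachable from this protein?” answered by graph traversal. The example below uses Python with made-up interactions purely for illustration; Bioconductor’s actual graph machinery lives in R packages.

```python
# Minimal sketch of a protein-interaction network as an adjacency list
# (hypothetical interactions; for illustration only).
from collections import deque

interactions = [("TP53", "MDM2"), ("TP53", "BRCA1"),
                ("BRCA1", "RAD51"), ("EGFR", "GRB2")]

# Build an undirected adjacency list from the interaction pairs.
graph = {}
for a, b in interactions:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def connected_component(graph, start):
    """Breadth-first search: every protein reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

print(sorted(connected_component(graph, "TP53")))
# ['BRCA1', 'MDM2', 'RAD51', 'TP53']
```

Scaled up to tens of thousands of measured interactions, the same structure supports the network analyses — modules, hubs, paths between genes of interest — that an institute-wide program can offer its collaborating labs.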
When Bob Strausberg describes his work on the mutational analysis of brain tumors, it’s a bit like listening to a genomic symphony — even the most technical notes lead back to a universal motif.
Last summer, Strausberg and collaborators at Johns Hopkins and the Ludwig Institute for Cancer Research published a report of the first comprehensive sequence analysis of the receptor tyrosine kinase gene family in glioblastoma brain tumors. Glioblastomas are the most common and aggressive form of primary brain tumors, and are particularly difficult to treat. Tyrosine kinase family genes play a key role in signaling between cancer cells and their microenvironment, and represent attractive molecular targets for therapeutic intervention.
The research team “developed and applied high-throughput DNA sequencing technologies and bioinformatics tools to peer into the genomes of glioblastomas in a manner that was previously unattainable,” Strausberg says. Their results indicate the promise of applying DNA sequencing technology to systematically assess the coding sequences of genes within cancer genomes.
Strausberg, who is currently vice president of human genome medicine at the J. Craig Venter Institute, has surveyed the progression of cancer research from a privileged position. After spending five years coordinating technology development for the Human Genome Project, he moved on to become director of the NCI’s Cancer Genomics Office, where he initiated both the Cancer Genome Anatomy Project and the Cancer Molecular Anatomy Project. Under his direction, the projects succeeded in cataloging all SNPs known to be associated with tumors, as well as connecting genomic information with efforts closer to the clinic, such as target identification and drug development.
“Cancer treatment is a revolution by evolution,” he says, predicting that the development of new drug compounds will be intimately tied to gaining a better understanding of molecular events specific to individual tumors. So while his recent study focused on glioblastoma mutation discovery, Strausberg stresses the importance of striving for comprehensive databases of all genome alterations in all common cancers.
Andy Feinberg characterizes the genome and the epigenome as something like land and sea, respectively. “We can kind of see a little bit from the shore, right around promoters and things like that, but there are vast oceans out there that we don’t understand.” Feinberg should know: he has been exploring these shores since happening upon them 24 years ago. Now, he says, the time is ripe “to get some boats and set sail.”
“I think there’s no question anymore that epigenetics is important, but now I think there are several important frontiers,” he says. The first has to do with identifying when epigenetic changes arise in relation to tumor development. In a recent paper, Feinberg and colleagues argue that such changes occur first: rather than beginning with a mutation, traditionally considered the initiating step in cancer development, his work suggests that cancer is rooted in an epigenetic disruption of progenitor cells.
Most of Feinberg’s work is driven by conventional epigenetic approaches, such as methylation studies and measures of allele-specific gene expression. However, as PI of the Johns Hopkins epigenetics genome center, Feinberg has set up collaborations to develop new high-throughput tools for array-based methylation analysis, chromatin analysis, and the measurement of allele-specific gene expression. The center’s main focus, though, is formulating methods to deal with the huge amounts of quantitative data generated by epigenomic analyses.
The final frontier, Feinberg says, is the extension of cancer studies to the genome level. “A lot of the work in cancer epigenetics is based on genes we know about, but many of the changes that occur in tumors could happen in places where we didn’t know to look.” Identifying new regions of interest — such as epigenetically regulated pathways — may lead the way to the development of new cancer therapies as well.
In order to get there, Feinberg has brought aboard a diverse set of researchers — molecular biologists, statisticians, genetic epidemiologists, and clinical investigators — to develop epigenomic tools for the study of cancer. These same tools may help other researchers interested in the field, he hopes. The recently proposed Human Epigenome Project, to which Feinberg is lending his expertise, may also pave the way to the creation of epigenomic approaches to understand and intervene in cancer progression.