Name: Toumy Guettouche
Title: OncoGenomics Core Manager, University of Miami
Professional Background: 2007-present, manager, Oncogenomics Core Facility, Sylvester Cancer Center, University of Miami School of Medicine; 2007, global product support scientist, Digene, Gaithersburg, Md.; 2004-2005, senior research associate scientist, Bayer Institute for Clinical Investigation, Bayer Diagnostics Division, Berkeley, Calif.
Education: 2002 — PhD, biochemistry and molecular biology, University of Miami; 1995 — BS, biology, University of Potsdam, Germany
Up until last week, when Toumy Guettouche needed to work on a microarray platform, his choice was limited to the Illumina BeadStation, which had been the sole array instrument in the lab he manages, the Oncogenomics Core Facility at the University of Miami's Sylvester Cancer Center.
Now, Guettouche is busy moving his core's resources, which include an Agilent Technologies BioAnalyzer and a NanoString nCounter System, to a new building that will house the combined cores of the Oncogenomics facility and the Miami Institute of Human Genomics' Center for Genomic Technology, providing a vastly expanded menu of microarrays and other genomic technologies.
With access to nearly all the major array platforms as well as a growing number of next-generation tools, including Illumina's Genome Analyzer and the nCounter, which he helped validate, Guettouche has a good view of the changing genomic research technology landscape.
To get a better sense of the way core managers are integrating the latest technologies into their offerings, BioArray News spoke with Guettouche last week. Below is an edited transcript of that interview.
What is your background?
I have a bachelor's from Potsdam University in Germany and I was awarded a Fulbright Scholarship to study in the US, where I remained to pursue my PhD. By the time I finished my PhD at the University of Miami, I pretty much knew that I didn't want to be a PI and spend my time pursuing grants. I always liked technology, so I did a short postdoc in Miami in a viral oncology lab.
After that, I went to Bayer Diagnostics, which used to be Chiron, where I was part of Bayer's Institute for Clinical Investigation. We were working with diagnostics customers developing diagnostics assays. Our hepatitis C assay, for example, is probably still the most accurate in the world, and is still used by the Siemens reference testing laboratory as a reference assay. [Siemens acquired Bayer's diagnostics division in 2006. — Ed.] If all [other] tests fail, they use that test to genotype and subtype HCV. We did smaller projects with customers like the Mayo Clinic developing applications based on existing platforms or the platforms that one of the customers had.
I worked there for almost two years. Then I went to Germany for a year and did consulting, and then I worked for Digene for six months, where I was a global application support scientist. It was not exactly what I was doing at Bayer, but I was working with customers and I was involved in the development of next-gen molecular diagnostics assays.
Then I got a job offer from the University of Miami's Sylvester Cancer Center and the job I have here is pretty damn good. I think it’s the best job I have ever had. I get to play with the latest technologies and I helped build the Oncogenomics Core Facility up from scratch. UM has raised more than a billion dollars and invested it in recruiting new faculty and building new facilities. We now have the Miami Institute of Human Genomics, the Interdisciplinary Stem Cell Institute, and a new head of pathology from the University of Southern California. UM is also trying to build up a biotech park. UM has really turned around and they are trying to build something special in South Florida.
What are some of the main kinds of services the Oncogenomics Core facility provides?
Previously the Miami Institute of Human Genomics' Center for Genomic Technology and the Sylvester Cancer Center were separate resources here. We now have a partnership between our two core facilities and, between the two of us, we have almost every available technology on the market. In terms of array platforms, we have Roche NimbleGen, Affymetrix, Illumina, and Agilent. We have a next-generation sequencer from Illumina, and we are going to buy two more next-gen systems now, and by the end of the year we'll have a 454. We are also trying to get a “third-generation” sequencing system. Next week our combined genomics facilities will be moving to a completely new building.
We are also interested in developing next-generation molecular diagnostics assays. The core can do proof-of-principle studies on these platforms and then hand over the assay to a clinical molecular diagnostics lab for validation. We will definitely be using the NanoString platform and next-gen sequencers here, but I don’t see us using microarrays for gene expression there.
Who is using the facility?
Basically, the main users are PIs at the University of Miami. We have some industry clients and clients from other universities like Moffitt and UCF. We have started to get requests from the biotech industry; smaller biotechs want to see if we can help them with their needs. We also collaborate with industry on developing molecular diagnostic tests.
What array platforms do you use, and why do you use them?
I think microarrays in general are going to be pushed aside. It seems that everything is moving towards next-generation sequencing technology. By the end of this year, I assume that prices for whole transcriptome sequencing will be comparable to what we pay for arrays these days. So we are not investing in gene expression and genotyping array systems any more. We are investing in sequence capture and CGH but not gene expression or genotyping. That will be replaced by next-generation or third-generation sequencing once prices are comparable.
The main hindrance to this process is the bioinformatics support. You need a lot of storage and analysis capability to handle the data, and data analysis software development is still in its early stages. You have to establish pipelines that go from the instrument to computers where you analyze the data, and you have to figure out if you should even keep the raw data images or not. We are struggling with storage and analysis of the data. The analysis and the price of data storage have kept us from going to next-generation sequencing on a significantly larger scale. Once those issues are solved, microarrays will be a thing of the past, surviving only for some niche uses or at lesser-funded institutions that don’t have the funds to build the infrastructure that is needed for next-gen sequencing. I think most well-funded big institutions will change to next-gen sequencing.
I know you are also using NanoString's nCounter platform. Why did you bring this in house, for what applications is it best suited, and who has been using it?
When I saw it, I was initially very skeptical, but after I became educated on the technology, I liked that it requires a small amount of input and you can assay hundreds of genes. If you are doing pathway analysis and have only a limited sample available, you can still analyze it; with real-time PCR, you'll run out of material. There is little hands-on time needed compared to qPCR, and your technicians don’t have to be highly skilled to do it. It's easier to run NanoString than qPCR applications.
Also, in qPCR there is the possibility of contamination. You have to have two different rooms: one for pre-PCR setup and one where you actually run the PCR. If you don’t have automation, it creates a lot of hands-on time and the technicians have to be good with pipetting. If you do large-scale qPCR, you need automation, which is quite expensive. With NanoString, since there's no amplification required, there is no chance of a contamination problem.
In what kinds of projects is that being used?
We have one user who wanted to basically create an NF-kB panel to distinguish subtypes of ATL tumors. There are lymphoma and leukemia types and certain treatment differences and they wanted to see if they could subtype different ATL tumors. We are also working on an HPV genotyping assay. We are also looking at viral genes. There is another project where we distinguish aggressive from non-aggressive forms of ovarian and breast cancers. But right now you can only do gene expression assays on NanoString.
I know that they are developing copy number analysis and microRNA profiling applications …
There is not that much out there for CNV analysis. Because you can do so many different assays at the same time on NanoString, you could do a CGH study and verify those with CNV assays. You likely can check more CNVs on one NanoString cartridge than you can with one TaqMan array. MicroRNA, on the other hand, is a very competitive landscape, but NanoString has a big advantage — they have fewer problems with bias due to lack of amplification and the ability to use crude lysate as starting material. The purification and reverse transcription of RNA often leads to some bias.
Have you tried to use it for validating second-gen sequencing data?
We are actually doing a couple of projects where we'll sequence transcriptomes and then try to identify genes that are different or important and then see if we can validate those results with NanoString. We have one project where we are looking at translocations and [trying to] see if certain translocations play a clinical role. It's much easier [and] faster to run gene-expression assays on NanoString than it is on next-generation sequencing. The data is much easier to analyze.
There is a huge amount of data that comes off the next-gen sequencing system and it takes time to do one run. The major problem is that the data streams are enormous and it's difficult to handle those. Work needs to be done to create software that analyzes data more efficiently and is user friendly and enables you to store the data that you get. I don't see a huge change in the next two years or so.
I heard you helped optimize the nCounter System. How did that go?
They had some issues with reagents initially. One thing they didn’t appreciate enough was the difference in climate between Miami and Seattle. Although we have air conditioning here, some [protocol] that worked in Seattle did not work in Miami when we did it. Now it’s a pretty solid system. We have developed a web-based open source tool for those who use [NanoString]. It streamlines the analysis they recommend so you don’t have to use Excel spreadsheets manually anymore. It's quite tedious when you do that kind of analysis by hand. For normalization of the data we are using qPCR algorithms like geNorm and NormFinder. Because it’s a new platform, there's not that much available for analysis. But I liked the technology. I am in charge of new technology assessment, so I am looking for technologies that bring us ahead and that basically allow us to be ahead of the curve.
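For readers unfamiliar with geNorm-style normalization, here is a minimal sketch in Python of the core idea: each candidate reference gene gets a stability measure M (the average standard deviation of its pairwise log-ratios with the other candidates, across samples), and per-sample normalization factors are taken as the geometric mean of the most stable genes. The function names and toy data are illustrative assumptions, not taken from any published implementation or from the lab's tool.

```python
from math import log2
from statistics import pstdev, geometric_mean

def genorm_stability(expr):
    """expr: dict mapping gene name -> list of expression values
    (one per sample). Returns dict gene -> M, where a lower M
    indicates a more stable candidate reference gene."""
    genes = list(expr)
    stability = {}
    for g in genes:
        variations = []
        for h in genes:
            if h == g:
                continue
            # Standard deviation of the per-sample log2 ratios g/h:
            # a stable pair keeps a constant ratio across samples.
            ratios = [log2(a / b) for a, b in zip(expr[g], expr[h])]
            variations.append(pstdev(ratios))
        stability[g] = sum(variations) / len(variations)
    return stability

def normalization_factors(expr, refs):
    """Per-sample normalization factor = geometric mean of the
    chosen reference genes' expression in that sample."""
    n_samples = len(next(iter(expr.values())))
    return [geometric_mean([expr[g][i] for g in refs])
            for i in range(n_samples)]
```

In practice one would rank genes by M, drop the least stable candidates iteratively, and divide each target gene's counts by the per-sample factor; the full algorithm (and NormFinder's model-based alternative) adds steps omitted here.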
Are there any applications where you think arrays will continue to dominate?
The main things that will survive are capture technology and array CGH. For capture, I think microarrays will be the front end of next-generation sequencing. Another thing is niche applications. For example, maybe for studying FFPE samples arrays could survive. But, quite honestly, I don't think that arrays will be around for a long time. I think next-generation sequencing will take over most of the applications, especially if it gets cheaper and easier to use. There is clear evidence that sequencing is better for genome-wide association studies. For gene expression, by using arrays you are losing all those splice variants and interesting point mutations. You just get so much more data from NGS than arrays.
Right now, only the well-funded institutions can afford the infrastructure for sequencing, while poorer institutions don’t have the means to get that infrastructure. Those are the users who will probably stick with arrays in the near future.
What about the use of arrays in molecular diagnostics?
In the molecular diagnostics market, the major players are pretty far behind in terms of the technology they are using. They are in some cases five years or even more behind. They are just now looking at microarrays and seeing how you can use them. There are some smaller diagnostics firms that are trying to develop tests on newer platforms, but most large test makers are just now optimizing qPCR-based assays. It is expensive to optimize a new technology platform. It will take someone to develop actual diagnostics that are useful on a next-gen platform. Maybe a smaller diagnostics company might take one platform with an assay to market and then a bigger diagnostics company might buy it and decide to take the assay to market. That's how these newer technologies might reach the marketplace.