Jörg Hoheisel, Head of DKFZ Functional Genome Analysis Division

AT A GLANCE

Received his PhD in physical biochemistry from the University of Constance, Germany.

Completed a two-year postdoctoral fellowship at the Imperial Cancer Research Fund in London, followed by three years at the ICRF in functional genomics.

Interests focus on developing the use of arrays and other assay techniques to analyze molecular interactions in living organisms.

Last Friday at the IBC Microarrays for Diagnostics Conference in San Diego, Jörg Hoheisel, head of the division of functional genome analysis at the German Cancer Research Center (DKFZ) in Heidelberg, Germany, spoke about his work to develop improved array-based assays for DNA and protein analysis.

Hoheisel discussed a DNA cancer chip he has developed, with 12,000 PCR products, including 5,000 known genes and 7,000 unique genes from a collaborative project with Roche and Merck. “The idea of this array was to identify those genes relevant to [cancer] diagnostics, then move to a low-density, low complexity chip,” he said.

He also spoke about the lab’s recent move back to radioactive labels from fluorescence to gain sensitivity, a move that allowed them to find twice as many genes in a cervical carcinoma with the cancer chip. Now, the group is beginning to use PNA, a synthetic alternative to DNA probes that requires no labeling or sample amplification. When a target binds to a PNA probe, the binding is detected through the presence of phosphates, which DNA and RNA contain but PNA does not. The phosphates are measured with mass spectrometry, which is sensitive enough to detect very low levels of target molecule, Hoheisel said. But the group has not yet optimized the surface chemistry for this new PNA-based array system.

On the microarray analysis front, Hoheisel said he favors methods that don’t just cluster data, but actually can map the results of microarray experiments to the phases of the cell cycle, as well as those that combine epidemiological data with the more traditional clustering results.

Recently, Hoheisel’s group has begun beta testing a custom high-density arraying machine from febit of Mannheim, Germany. This benchtop instrument combines DNA synthesis, microfluidics-based hybridization, and CCD hybridization detection, and makes chips with 30,000 features in four subunits. The group has also been working on antibody arrays.

At the conference, BioArray News caught up with Hoheisel to discuss his work.

What do you think the biggest challenge is with working with microarrays?

The interpretation. We have so much data. Usually just a minor part is published and the rest is dumped into the database, never to be used again.

Once commonly used standards for data analysis are established, such as the MIAME (minimum information about a microarray experiment) criteria, the idea is to reuse the data.

 

How do you find the MIAME standard? Is it tough to get all that information into your database?

We actually require more information ourselves, partly because we want to use the annotations for analyzing the data. The more data we have, the more we can hopefully learn from the annotation. In terms of a minimal agreeable standard, [MIAME] is a reasonable amount. Less would not be good, because otherwise, you would be able to look at your data and I would be able to look at my data but we would not be able to compare.

 

In your talk, you said you have gotten positive results from febit’s array machine. Can you elaborate?

The main advantage of the machine is the flexibility. I can make one chip today, another chip the next day. Never mind what sort of chip, I do what I want. I can also learn empirically. I can synthesize the appropriate chip, do some analysis, and find out that out of 64,000 oligos, 10 percent don’t work. So I throw those out, design new oligos for that 10 percent, synthesize a new chip, and go ahead to form the optimum array for whatever sort of analysis I want to do.

Also, the fluidics system gives you advantages. The hybridization occurs in a very limited volume. I can move that volume across the oligos, and I can do PCR on the chip. Each element can go down [on the chip] the way I like it. We can put double-stranded pieces of DNA on the chip, starting from the oligo, then doing a polymerase reaction on the chip, and eventually ending up with a double-stranded piece of DNA on the spot. That’s good for SNP analysis, protein-DNA interactions, or exon identification. You could, for instance, start with an oligo at the end of an exon, and given any population of RNA, hybridize this population to the chip, then do a polymerase reaction, then check what the next base is: Is it after exon three, exon four, or exon 3A, which has not been annotated in the human sequence? This sort of thing can be done.

 

Did you ever use Affymetrix?

Not really. We have the system in house, but I am more interested in [developing arrays] myself. Originally, in 1991, we used filter arrays for mapping, then for transcriptional profiling at a low level, with several thousands of molecules on filter arrays. I have been in the array business for fourteen years now, starting as a postdoc. I joined the Imperial Cancer Research Fund in 1989. I knew the PhD student in [Ed Southern’s] lab who did the work.

 

Having seen where DNA arrays started out and where they are, are there any lessons for protein array technology?

Sure. At the moment, there is the same hype as there was with the DNA arrays. And there are multiple problems with the arrays. For instance, there is no problem with spotting antibodies, which are simple molecules, but to make them work properly is a totally different ballgame. And to interpret the data is going to be much more complex than interpreting DNA data. It will take more effort to get this going. It will eventually work. But it will take five years.

 

Now in your talk, you said that there were no surface chemistries that worked well for antibody arrays...

We have found a couple of chemistries that seem to work with the more global approach. Extracting many proteins from a given tissue, labeling them, then incubating them with the antibody array, you have a lot of problems due to the differences between the proteins. They are bound to each other in complexes. They are non-specifically bound to each other. They have different hydrophobicities. Do you incubate for an hour, or do you incubate for twenty hours? After twenty hours, the small ones will be bound, but the big one is just about to be bound. If you incubate for an hour, the big one will be fine, but the small one will not be bound at all. So there is a lot to be done.
