Van der Spek on Converging In Vitro and In Vivo Data in 3D

Peter van der Spek
Director
Erasmus University Medical Center

Last month, the Erasmus University Medical Center in Rotterdam, the Netherlands, launched a bioinformatics center that will serve the clinicians in this huge medical facility and begin bridging the silos of genomic and proteomic data with the clinic.

The success of this facility is of interest to molecular biology tool makers, as it is a testing and proving ground for the convergence of 'omics technologies with the zero-fault-tolerance clinical arena.

The new Erasmus MC bioinformatics center combines a powerful IT backbone with a virtual reality projection area where genomics and proteomics data and digital imaging data — ranging from ultrasound to MRI — can be viewed in three dimensions to help clinicians better understand patients' conditions and accelerate diagnostic classification and surgical intervention strategies.

BioCommerce Week spoke with Peter van der Spek, the director of the center and the head of bioinformatics, to learn how the center works.

Where do you see this convergence of genomic, proteomic and clinical data? Is it science or art?

That is a big question. Overall, what we see is that the fields of molecular science and the more clinically oriented sciences are moving closer and closer to each other. The reason we have strategically implemented a bioinformatics group here is that clinicians have very little affinity with IT. However, these clinicians want to be able to compare, in a very short period of time, in one snapshot, an overview of the patients they have seen before, and they want to position the new patients that they are surveying. So information technology becomes more important in honing their clinical decision-making process and making it more accurate. Now you are able to look at scans for diagnostic purposes. And, with all the new tools, it becomes easier to diagnose in earlier stages of disease.

What is the toolkit that is providing the data that you are talking about?

We support microarrays — we are strategic partners with Affymetrix — and we support proteomics. Basically, the same medical disciplines that apply genomics to their research also apply proteomics. And we work with a couple of software vendors who are all partners of Affymetrix.

The main driver for the strategic alliance with Affymetrix is that we have patients. None of the pharmas that are applying the technologies have patients. And, as of 27 December, the FDA approved the use of Affymetrix equipment for clinical trials. That was interesting because, at one moment, they had one piece of Affymetrix equipment certified for trials. Now suddenly the technology is recognized by the FDA and will be accepted for clinical decision-making.

We are also heavily invested in Bruker technology — LC/MS and TOF [mass spectrometers] — and are heavily involved in connecting these Bruker instruments directly to tools like Omniviz and Spotfire. For genomics experiments there is a lot of software out there. For proteomics, there is hardly anything available — even the tools that come with the equipment that is purchased are not fully developed. That is an area where the technology is rapidly improving and is going to add important data; however, it is absolutely in its infancy.

How does this all tie together?

The doctor has clinical data on a patient who has a progressive disease. They check things like blood pressure and the number of white blood cells — all kinds of clinical parameters that are being stored in the clinical patient information system in the hospital. Now what the doctors want to do is not only measure all of the genes on the microarray but also couple their clinical observations with what they have on the microarrays. And in order to facilitate that, they use the bioinformatics department.
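
In practice, that coupling amounts to joining patient-keyed clinical records with patient-keyed expression values. Here is a minimal sketch of the idea in Python; the file and column names are hypothetical, not Erasmus MC's actual systems.

```python
import pandas as pd

# Hypothetical exports, one row per patient in each file.
clinical = pd.read_csv("clinical_parameters.csv")    # patient_id, blood_pressure, wbc_count, ...
expression = pd.read_csv("microarray_signals.csv")   # patient_id, one column per probe set

# Couple the clinical observations to the molecular measurements on patient ID.
coupled = clinical.merge(expression, on="patient_id", how="inner")

# Example question a clinician might ask: which probes track white-blood-cell count?
correlations = coupled.select_dtypes("number").corr()["wbc_count"].drop("wbc_count")
print(correlations.sort_values(ascending=False).head())
```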

How do you operate?

We couple clinical data onto the molecular data that comes from microarray or proteomics experiments. When we opened our department, we decided we wanted to bring in virtual reality technology because more and more scientists are going to apply functional MRI. So, they are interested in a particular marker, they want to stain that marker and visualize it in a scan, and they want to mimic mutations they have found in their patients in mouse models. We have a very large collection of mouse knockout models that mimic certain diseases. Now, of course, what we want to do is overlay our results from genomics and proteomics in the mouse models with the human situation. With the virtual reality center we are able to actually overlay the results of the mouse study with the arrays over the same genes in the syntenic regions in the human, and illuminate the genes in regions of the chromosome that are not conserved. So, basically, the order of the genes is determined by synteny, and that helps us prioritize the most significant, biologically relevant changes in the microarray study.
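
The prioritization he describes boils down to intersecting the mouse and human hit lists through an ortholog table restricted to syntenic blocks. A minimal sketch, with made-up gene names and fold changes purely for illustration:

```python
import pandas as pd

# Hypothetical inputs:
#  - human_hits: differentially expressed genes from the human microarray study
#  - mouse_hits: genes changed in the matching mouse knockout model
#  - synteny:    mouse-to-human ortholog pairs that fall in conserved (syntenic) blocks
human_hits = pd.DataFrame({"human_gene": ["TP53", "MYC", "GATA1"],
                           "human_fold_change": [2.1, 1.8, 3.0]})
mouse_hits = pd.DataFrame({"mouse_gene": ["Trp53", "Gata1"],
                           "mouse_fold_change": [2.4, 2.7]})
synteny = pd.DataFrame({"mouse_gene": ["Trp53", "Myc", "Gata1"],
                        "human_gene": ["TP53", "MYC", "GATA1"]})

# Genes that change in both species *and* lie in syntenic regions rise to the top.
overlay = (mouse_hits.merge(synteny, on="mouse_gene")
                     .merge(human_hits, on="human_gene"))
overlay["priority"] = overlay["mouse_fold_change"] * overlay["human_fold_change"]
print(overlay.sort_values("priority", ascending=False))
```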

So the doctors can see this all from the center?

No, they can see it from their desktops. Certain doctors come to this facility and meet with biologists, molecular biologists, chemists, and pharmacologists, and then they discuss the data in our data analysis center. They organize multidisciplinary data discussions on the larger genomic and proteomic studies, where samples are being correlated with the clinical parameters.

Making 3-D scans is not new. However, examining the scan in a 3-D cinema in a hospital is a debut for Rotterdam. This was made possible by the Economic Development Board of Rotterdam. For the neurosurgeon who is looking for technology to optimally study up front where a tumor is positioned before an operation, you can only see that in a real 3-D environment. To do that, we decided to bring in heavy computational power for looking at genomics and proteomics data, we use the same computational power to look at the clinical data, and we use the same computational power for driving the graphics support. So the investment in the computer infrastructure is done once, but you use it in two areas. But always, the priority is given to clinical observations first, because if there are time-dependent decisions, the research has to wait. We have the hardware infrastructure so that this does not interfere at all.

Will it take any extra training for the doctors?

We have three technologies — Spotfire, Inxight and Omniviz. Inxight is very interesting — it has a learning curve of zero. Of course, the doctor has no time to learn it, and has no interest. He wants to click a few buttons and have an overview. That is what we achieve with the web technology from Inxight. The data analysis tools, such as Spotfire and Omniviz, we use in our core team to come to our results, which we wrap with web technology and then pass back via the intranet. The doctors can just use their browser and find the results. That helps them navigate through results and prevents them from being exposed to complex statistical and mathematical procedures. They see more in our theater than on conventional screens, according to the feedback that I've received from the cardiologists and radiologists.

What do tool makers need to know about this convergence?

The molecular biology tools are going to become more and more important in the clinical visualization field. With regard to the particular biomarkers that are being discovered, you should know that all pharmas are looking at them in parallel with their drug discovery process because they have to go to individualized treatment. Biomarkers and new drugs, you want to monitor in vivo and visualize in the MRI scans. We are gradually learning what the win-wins are of visualizing certain scans in 3-D. Molecular imaging is now the buzzword, as illustrated by General Electric Healthcare's recent acquisition of Amersham, which has tracers in its portfolio.

What does this cost on a per-patient level?

We have looked at that. It is not what it costs, but what it brings. We are talking with insurance companies. We did a big survey of 300 leukemia patients, and we can define a certain subset in the patient cohort that we can treat with retinoic acid, a vitamin A derivative. If you give it to a particular leukemia patient, it pushes the development, the differentiation, of the white cells. But it works only in a small fraction, from 5 percent to 7 percent of the patients. With microarrays, we can discriminate which patients can benefit from this treatment. Out of 300 patients, we eventually could detect 12 patients to whom we could give this drug. With a conventional strategy, you could not identify them. A patient I could not recognize with the conventional strategy, I would give a bone marrow transplant, and that costs €150,000 ($193,000). Of course, the insurance company is interested in me running an array. They would say: 'If your array costs €2,000, we don't care.' You save €150,000 by not having to give a bone marrow transplant.
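
Reading those figures at face value, the back-of-the-envelope arithmetic for the cohort looks roughly like this (a sketch using only the numbers quoted above, not an actual reimbursement calculation):

```python
# Figures quoted in the interview.
patients = 300
responders = 12            # patients the array flags as retinoic acid candidates
array_cost = 2_000         # euros per patient arrayed
transplant_cost = 150_000  # euros per bone marrow transplant avoided

screening_cost = patients * array_cost        # 600,000 euros to array the whole cohort
avoided_cost = responders * transplant_cost   # 1,800,000 euros in transplants avoided

print(f"Net saving for the cohort: {avoided_cost - screening_cost:,} euros")  # 1,200,000 euros
```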

Given this world, what needs to happen with the tools to enable this?

Data integration becomes more and more important. Papers are coming out daily. Once researchers have analyzed their array data, they are not going to go back half a year later and ask which papers have been published and which genes have been found. They don't have the time or the infrastructure to do it. So an encyclopedia of the genes, automatically updated and connected to array data, is essential. We have the infrastructure to realize this now. But I am confident that large pharma is struggling with annotation, which is a big bottleneck.
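
One small ingredient of such an automatically updated encyclopedia is simply refreshing, per gene on the array, what the literature currently says. A minimal sketch against NCBI's public E-utilities; the gene symbols and the nightly-refresh idea are illustrative assumptions, not a description of the Erasmus MC system:

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(gene_symbol: str) -> int:
    """Return the current number of PubMed records mentioning a gene symbol."""
    params = {"db": "pubmed", "term": f"{gene_symbol}[Title/Abstract]", "retmode": "json"}
    reply = requests.get(ESEARCH, params=params, timeout=30).json()
    return int(reply["esearchresult"]["count"])

# Hypothetical top hits from an array experiment; re-running this nightly keeps
# the "encyclopedia" view in step with newly published papers.
for gene in ["TP53", "MYC", "GATA1"]:
    print(gene, pubmed_count(gene))
```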

What do you see for mass specs?

Diagnostic classification is easily in reach. The sensitivity of these systems gives so much more confidence. However, the interpretation of the individual masses is difficult because the databases that are used to determine what molecular weight is derived from what peptide are very poor, because we don't know enough about which splice variants naturally occur. How can you predict or say this molecular weight comes from this protein if you don't know what splice variants are present in your sample? So, you start to get there by having the integration of genomics, transcriptomics, proteomics, and, besides that, metabolomics.

Don't underestimate metabolomics because in a large hospital the large volumes of small molecules being studied are very important — especially with all the metabolic disorders.
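
To make the splice-variant ambiguity above concrete, here is a toy sketch: the isoforms, peptides, and observed mass are all invented, but they show how a single measured peptide mass can match peptides from more than one variant of the same protein.

```python
# Monoisotopic residue masses (Da) for a few amino acids; one water is added per peptide.
RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
           "V": 99.06841, "T": 101.04768, "L": 113.08406, "I": 113.08406,
           "N": 114.04293, "D": 115.02694, "Q": 128.05858, "K": 128.09496,
           "E": 129.04259, "R": 156.10111, "F": 147.06841}
WATER = 18.01056

def peptide_mass(seq: str) -> float:
    """Monoisotopic mass of an unmodified peptide."""
    return sum(RESIDUE[aa] for aa in seq) + WATER

# Hypothetical peptides from two splice variants of the same (made-up) protein.
candidates = {"isoform 1": "VATLK", "isoform 2": "VATIK"}
observed = 530.35   # an observed mass, for illustration

# Both peptides fit the observed mass equally well, so the measurement alone
# cannot say which splice variant was present in the sample.
for isoform, pep in candidates.items():
    m = peptide_mass(pep)
    print(f"{isoform}: {pep} -> {m:.3f} Da (off by {abs(m - observed):.3f} Da)")
```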

So is the bioinformatics center the most important area in this convergence?

I wouldn't say our group is the most important; however, we sit as a spider in the middle of the web. Currently, our scientists are happy with the level of data integration we can offer them. All of the clinicians who have their molecular biologists do experiments suffer once they have long lists of genes back on their desk. It's virtually impossible to see the coherence in these lists. Now they can all study their pathways using Ingenuity. Erasmus MC has announced a strategic development collaboration to focus on usage of the Ingenuity databases not only for genomics but also for proteomics applications. The scientists throw their lists of genes against this database and get back networks showing the biological relationships that are occurring within the list.

What is the next level? If you go from gene list to network, subsequently one wants to go to the literature that describes these pathways. There is an enormous win-win here if you integrate the software that is used for looking at arrays with software that can visualize the relationships between the genes as they are described in the literature, in the PDF files. You go to the network describing your biologically relevant pathway, then go straight to the literature: not the abstract, but the full paper.
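
The list-to-network step amounts to intersecting the experimental gene list with a curated relationship database. A minimal sketch with networkx; the relationship table is made up and stands in for a commercial knowledge base such as Ingenuity's, which is not queryable like this:

```python
import networkx as nx

# Made-up gene-gene relationships standing in for a curated knowledge base.
knowledge_base = [("TP53", "MDM2"), ("TP53", "CDKN1A"), ("MYC", "MAX"),
                  ("MDM2", "CDKN1A"), ("GATA1", "ZFPM1")]

graph = nx.Graph(knowledge_base)

# "Throw the list of genes against the database": keep only the relationships
# whose endpoints both appear in the experimental hit list.
hit_list = ["TP53", "MDM2", "CDKN1A", "GATA1"]
network = graph.subgraph(n for n in hit_list if n in graph)

print(network.edges())   # the coherent sub-network hidden inside the flat gene list
```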

What do the tools look like in five years?

The software is easier to use and to couple to different databases, and it becomes easier for scientists to cross borders and work across disciplines, to look outside their own field, because of the higher levels of data integration. That is driven by the rapid progress of web technology in parallel with the rapid development of genomics and proteomics technology.
