
J&J's Sergey Ilyin Marries Bioinformatics and HTS for Functional Informatics


In an effort to bridge the gap between target discovery and target validation, a research team at Johnson & Johnson Pharmaceutical R&D has merged bioinformatics with high-throughput screening to create a new research strategy they’ve dubbed “functional informatics.” Led by Sergey Ilyin, bioinformatics group leader, the Spring House, Pa.-based R&D group found that integrating several technologies that are traditionally isolated in “research silos” was the best approach to functionally characterizing target molecules. Specifically, Ilyin’s team combined high-throughput screening with engineered libraries of siRNA molecules to enable whole-genome functional screening and speed the target validation process. BioInform caught up with Ilyin recently to discuss some of the details of this work.

Can you first describe what you mean by “functional informatics”?

High-throughput screening technologies are well developed, and robotic systems capable of handling thousands of data points per day are deployed at major pharmaceutical companies. Some of these systems are also designed to conduct high-content cell-based screening and can generate comprehensive sets of data for biological processes of interest. One of the current limitations in the industry is the validation and selection of novel targets and biomarkers, as companies are under pressure to develop products based on novel mechanisms of action to sustain growth. Historically, combining and integrating the experience and perspectives of different disciplines has resolved many scientific and business-oriented limitations. Here at J&J, we approached some of these challenges by combining and integrating traditionally separate areas of drug discovery, namely high-throughput screening, proteomics, and bioinformatics, and we call this paradigm “functional informatics.” In doing so, we leveraged our original investment in HTS equipment and our expertise in bioinformatics and proteomics.

In one example of this model, we performed screening with libraries of siRNA molecules in essentially the same way as we would with compound libraries, except that an appropriate interval is allowed between siRNA transfection and biological testing. The siRNA functions as a specific gene-based inhibitor, and genes with effects on the biological process of interest are selected for TaqMan- and microarray-based expression profiling. Pathways of interest are then constructed using a combination of gene expression data and proteomics tools.
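
To make that screening step concrete, here is a minimal sketch of the hit-calling logic such a pipeline might apply once the post-transfection readouts are in; the plate values, gene names, and z-score cutoff are illustrative assumptions, not details of the J&J system.

```python
# A minimal sketch (not J&J's code) of hit calling in an siRNA screen:
# each knockdown well is scored against non-targeting control wells, and
# genes whose knockdown shifts the readout are flagged for profiling.
from statistics import mean, stdev

def call_hits(gene_readouts: dict[str, float],
              control_readouts: list[float],
              z_cutoff: float = 3.0) -> list[str]:
    """Return genes whose post-transfection assay readout lies more than
    z_cutoff control standard deviations from the control mean."""
    mu, sd = mean(control_readouts), stdev(control_readouts)
    return [gene for gene, value in gene_readouts.items()
            if abs(value - mu) / sd > z_cutoff]

# Hypothetical plate: non-targeting siRNA controls plus four knockdowns.
controls = [0.97, 1.02, 1.05, 0.99, 1.01]
readouts = {"GENE_A": 0.41, "GENE_B": 0.98, "GENE_C": 1.03, "GENE_D": 1.62}
print(call_hits(readouts, controls))  # ['GENE_A', 'GENE_D'] -> profile these
```

A production screen would normalize per plate and use replicates, but the flow is the same: treat each siRNA well like a compound well, then hand the hit list to expression profiling.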

How do you conduct this pathway analysis? Did you develop your own tools or are you using commercial software?

We’re using a combination of tools. We have a collaboration with OmniViz, and they did some custom work for us, but we’re trying to use basically everything that’s available and not very expensive. I think that for this project the actual data itself is more important. We have to generate the data, and that is the major effort.

How long has your group at J&J been relying on functional informatics?

The development of the concept was a gradual process. Our initial motivation was to develop an automated platform for validating microarray data by RT-PCR, which we considered a bottleneck at the time. We gradually learned to automate and integrate other complementary processes.
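
To give a sense of what automated RT-PCR validation of microarray calls involves, here is a hedged sketch of a concordance check between a microarray fold change and a TaqMan measurement; the 2^-ddCt conversion is the standard relation, while the tolerance and example values are illustrative assumptions.

```python
# Sketch of a microarray-vs-RT-PCR concordance check; thresholds and
# values are illustrative, not taken from the J&J platform.
import math

def qpcr_log2_fold_change(delta_delta_ct: float) -> float:
    """Convert a TaqMan ddCt value to a log2 fold change via the
    standard 2^-ddCt relation."""
    return math.log2(2 ** (-delta_delta_ct))

def validated(array_log2fc: float, ddct: float, tol: float = 1.0) -> bool:
    """Call a microarray result validated when the RT-PCR measurement
    agrees in sign and within `tol` log2 units of magnitude."""
    qpcr = qpcr_log2_fold_change(ddct)
    return array_log2fc * qpcr > 0 and abs(array_log2fc - qpcr) <= tol

print(validated(array_log2fc=1.8, ddct=-2.1))  # True: both roughly 4x up
```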

So the microarray data validation is no longer a bottleneck?

It’s no longer a bottleneck, but it was the process in which we started to merge different technologies. By doing so, one day we moved to cell-based assays with siRNA, and now we are trying to incorporate additional processes. But that was our initial motivation; that’s how this started. Even though it’s a very simple concept, it is not well addressed because traditionally it’s done in a different way.

Basically, it’s a very cost-effective approach. There’s already a significant investment in robotics, and traditionally that investment was motivated by the opportunity to screen libraries of small molecules. But this equipment is applicable to many other processes, and as it becomes more sophisticated in terms of throughput, allowing more complicated screens that measure several parameters on the cells, coupling it with other steps, such as functional genomics, becomes a very rational move.

Originally we started with a need to mass-produce plates for TaqMan, and in the process of doing so we moved in other directions, some more successful than others. This is not the only process that we tried to automate, but it’s one that we found rewarding, one that is working because we did extensive validation.
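
As a hypothetical illustration of the software side of mass-producing TaqMan plates, the sketch below generates a 384-well plate map; the well-naming scheme and assay identifiers are assumptions for the example, not J&J's actual tooling.

```python
# Hypothetical 384-well plate map for TaqMan assays: rows A-P by
# columns 1-24, assigned in row-major order.
from itertools import product

ROWS = "ABCDEFGHIJKLMNOP"   # 16 rows
COLS = range(1, 25)         # 24 columns -> 384 wells

def plate_map(assays: list[str]) -> dict[str, str]:
    """Map well IDs (e.g. 'A1') to assay names; a liquid-handler work
    list can be written directly from this mapping."""
    wells = (f"{row}{col}" for row, col in product(ROWS, COLS))
    return dict(zip(wells, assays))

layout = plate_map([f"ASSAY_{i:03d}" for i in range(1, 385)])
print(layout["A1"], layout["P24"])  # ASSAY_001 ASSAY_384
```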

Can you discuss some specifics of how you used siRNA libraries with the HTS equipment? Do you design your own siRNAs for this process?

siRNA libraries are constructed using a combination of different algorithms and methods relying on in-house and external expertise. Sequences to target are selected based on microarray studies, in silico pathway analysis, and opportunistic acquisitions of commercially available libraries.
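
As an illustration of the kind of sequence heuristics such design algorithms combine, the sketch below applies a few widely published filters (GC content, base runs, 3' end identity); it is a simplified stand-in for the in-house and commercial methods described, and the example sequence is invented.

```python
# Simplified siRNA sequence filters, loosely after published design
# rules (e.g. Reynolds et al. 2004); real libraries combine many more
# criteria. The 19-mer below is invented for illustration.
def gc_fraction(seq: str) -> float:
    return sum(base in "GC" for base in seq) / len(seq)

def passes_basic_filters(sense_19mer: str) -> bool:
    """Keep candidates with moderate GC content, no long G/C runs, and
    an A/U at the sense 3' end (biases strand loading into RISC)."""
    seq = sense_19mer.upper().replace("U", "T")
    if len(seq) != 19:
        return False
    if not 0.30 <= gc_fraction(seq) <= 0.52:
        return False
    if "GGGG" in seq or "CCCC" in seq:
        return False
    return seq[-1] in "AT"

print(passes_basic_filters("GAUUCAGCUACCAUGGAAU"))  # True: ~42% GC, ends in U
```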

How has this approach proved advantageous in your work?

The approach allows efficient validation of targets and biomarkers for any biological process of interest, as long as that process can be efficiently and accurately modeled in cell-based systems. In vivo validation is still a big challenge, and we are trying to develop effective and scalable technologies to address it.

What kind of improvement in terms of productivity or data quality does the siRNA screening provide?

You can think about that in the following terms: Biology 10 years ago was basically about generating data, and analysis was not an issue because you were dealing with only a few data points. Then, all the excitement of microarrays changed the perspective — data generation became a very easy process, but data analysis became a serious challenge. It took a considerable amount of time before people learned how to do it properly. But then the next challenge was the interpretation of this data. So even though we can generate a wealth of data, we fairly soon discovered that it’s difficult to get functional meaning from this information. So the advantage of the approaches based on siRNA and high-throughput technologies is that you can measure the impact of the gene on the biological process of interest. So you can focus on a therapeutic area and find a gene or family of genes that affect the biological process. Then, of course, you can combine it with microarrays and proteomics tools and actually [elucidate] novel pathways.
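
To illustrate that combination step in miniature, here is a toy sketch, with placeholder gene names, of intersecting the functional hits from an siRNA screen with the genes that move in the expression profile; it illustrates the idea rather than the team's actual workflow.

```python
# Toy illustration of combining siRNA screen hits with expression
# profiling to nominate pathway members; gene names are placeholders.
sirna_hits = {"GENE_A", "GENE_C", "GENE_F"}   # knockdowns that change the phenotype
regulated = {"GENE_A", "GENE_B", "GENE_F"}    # genes moving in the microarray profile

pathway_candidates = sorted(sirna_hits & regulated)
print(pathway_candidates)  # ['GENE_A', 'GENE_F'] -> seeds for pathway building
```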

Looking forward, what are the primary challenges you see ahead in bioinformatics?

Bioinformatics will be increasingly involved in linking data derived from genomics, metabolomics, proteomics, and other approaches to model biological pathways of interest to provide a more comprehensive and integrative understanding of target biology.

How does what you’re doing under the name of functional informatics differ from what others seem to be calling systems biology?

That we can argue! [Laughs.] I like the term, but it’s a personal preference, so I wouldn’t argue one way or the other.
