
PerkinElmer's VP of R&D on Informatics for the Total Solution


As vice president of life sciences R&D at PerkinElmer, Neil Cook oversees all aspects of the company’s product development. BioInform recently spoke to Cook about the role of informatics in the company’s strategy, its plans to extend its bioinformatics offerings in the future, and the importance of data standards for the future of the industry.


How is informatics organized within PerkinElmer? Does each instrumentation area develop and market its associated informatics tools, or is there a hierarchical structure that oversees these efforts across the organization?

PerkinElmer is a total solutions provider, so informatics is key in providing those customer solutions. We have four fundamental pillars that we look to expand on, and informatics is one of those pillars, together with liquid handling, detection, and labeling chemistry. So the informatics solution is always factored into what we want to achieve when we’re working on a particular application.

Clearly informatics begins in the laboratory, so managing and exporting your data, statistical informatics at the data level, is a vital element. That level of informatics is handled at the individual platform level, and it's important to us because it's what we call connectivity: it allows all of our instruments to communicate with each other, ultimately through whatever framework the customer requires. If you look at products such as our ArrayInformatics, they have higher levels of informatics capability: while you're taking data, you're organizing the data in certain ways. There are levels of statistical informatics that appear in specific project packages and in specific products. Ultimately we will be looking to higher levels of the informatics hierarchy, where we start to bring together what you would call true bioinformatics, which is in silico experimentation off parallel data streams.

Within PerkinElmer, because we perceive informatics as being so important, we don’t organize or report our informatics business opportunities or capabilities separately. It’s embedded in our organization.

How did last fall’s acquisition of Packard BioScience impact product development at PerkinElmer?

One of the benefits of the acquisition was that we both complemented and grew the mass of our software capability, in both what you would call pure-play informatics, more at the ArrayInformatics level, and our core software expertise and development capability. So the acquisition of Packard BioScience was very additive in this regard. There was no duplication.


So would you say the integration process between the product lines and staff is complete?

At the level of software, I'd say the integration is complete. The process we're going through now, six months after the acquisition, is finishing off the projects that existed in the portfolio; we will then begin to meld those resources around our new project portfolio, which we're currently in the process of defining.


You said that ArrayInformatics is an example of a pure-play bioinformatics solution. Are you looking to move deeper into this area in the future?

It depends on our contribution in a given application area. Microarrays and, particularly in the future, protein arrays are areas where we have a major competitive advantage. In protein arrays, I would expect us to provide a very complete total solution that would include the informatics analysis. If you look at protein arrays, we provide capabilities across all four pillars that I mentioned: We would supply the liquid-handling capability that processes the chips; we'd supply the labeling capability and the reagents, the chemistries and consumables that allow the protein chips to work; and we'd provide the detectors. And to make that solution come together and be of the highest potential value to the customer, we need to bring the informatics to analyze that data and allow data processing in a very facile manner for the customer.


What partnerships does PerkinElmer have with bioinformatics vendors?

We're very open to working with people who are experts in particular domains. If you look at Nonlinear Dynamics, they are particularly powerful in proteomics applications. From our perspective, because we are a company that covers the whole waterfront, we want to provide the best solutions in all areas. Some we choose to do in house, where we have particular expertise or capability, and others we partner out.


What are the key bioinformatics challenges in the near and long term, and how will PerkinElmer meet them?

One of the areas that's beginning to emerge, though it's perhaps not yet as well respected as it needs to be, is what I would call statistical informatics. A lot of the data being taken at the moment is not qualified at the point of capture. At that first level of informatics the treatment of data is often binary: it can be flagged as bad and therefore rejected, which leaves holes in the database, but the data that does get in is not always up to a consistent standard. So the big challenge is, for instance, if you're trying to look at microarray data generated in different labs or even on different systems, there's no consistent way of baselining that data from a statistical perspective. If there's garbage data in there and you can't spot it, filter it out, or weight it, you may want to keep the data but flag that there's a bit more risk in using that data set. Right now there's no capability to do that.

So as we go forward, if we're going to truly enable independent labs to share and utilize data in an in silico environment, there needs to be an agreed standard that can be applied as a tool at the level of statistical informatics, so that you can use other people's data in a constructive manner.
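To make the distinction concrete, the short sketch below contrasts binary accept/reject filtering with the kind of quality weighting Cook is describing. It is purely illustrative, not a PerkinElmer product or method; the measurements, quality scores, and the 0.8 cutoff are all invented for the example.

```python
import numpy as np

# Hypothetical expression measurements for one gene, pooled from several
# labs, each point carrying a lab-assigned quality score in [0, 1].
measurements = np.array([8.2, 7.9, 12.5, 8.1, 8.4])
quality = np.array([0.95, 0.90, 0.30, 0.85, 0.92])

# Binary ("digital") filtering: points below an arbitrary cutoff are
# rejected outright, leaving holes in the combined data set.
mask = quality >= 0.8
binary_mean = measurements[mask].mean()

# Statistical weighting: every point is kept, but low-confidence data
# contributes proportionally less to the estimate.
weighted_mean = np.average(measurements, weights=quality)

# A simple risk figure tells downstream users how much of the signal
# rests on low-confidence data, rather than hiding it.
risk = 1.0 - quality.mean()

print(f"binary-filter mean:    {binary_mean:.2f}")
print(f"quality-weighted mean: {weighted_mean:.2f}")
print(f"residual risk score:   {risk:.2f}")
```

The point of the weighting scheme is the one Cook makes: rather than a database with holes, you keep the full record but attach a measure of risk so other labs can decide how much trust to place in the data.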


Is there a standard out there you’re ready to adopt?

No. This is an area we’re investigating, and clearly this is something that needs to be driven with a lot of detailed knowledge from the customers, the people who are actually going to use it. An accepted standard is not something you can impose, so we’re trying to engage the users in that discussion. Ultimately, by working closely with those customers, our aim would be to help set those standards. They set the standards, but we provide the tools that match their needs.


So the MIAME (Minimum Information about a Microarray Experiment) standard doesn’t touch on the statistical level of capturing data?

Not at the level that I’m envisaging, but the fact that those standards are emerging begins to identify the need that’s becoming manifest to the users. It only comes when people are trying to establish large databases. It happened in the Human Genome Project, for instance. Standards were set for annotating and accepting and rejecting sequence data. And it will come in these other fields of big biology. I think microarrays are likely to be next.
