Bioinformatics firm Genedata and high-content screening (HCS) provider Cellomics have taken steps to further integrate their HCS software products to meet the requirements of a market that officials from both firms claim is rapidly becoming more sophisticated.
Next week at the Society for Biomolecular Sciences conference in Montreal, Genedata will present a new “workflow” it has developed in collaboration with Cellomics for importing data from HCS experiments into its Screener software, while Cellomics plans to announce a new HCS analysis product that can integrate more effectively with Screener and other third-party tools.
Mark Collins, senior product manager for informatics at Cellomics, a subsidiary of Thermo Fisher Scientific, said that the company’s customers have been asking for tighter integration between their HCS software and the rest of their drug-discovery IT infrastructures. He called this a good sign because “it shows that the technology of high-content [screening] is very mature now and people trust the results and they want to use those results in making decisions about compounds or targets.”
Collins noted that Screener is one of a number of tools that drug-discovery groups currently use to manage traditional biochemical screening data, so it made sense to work with Genedata to ensure that the integration was as seamless as possible.
Collins noted that the project is “not a formal partnership,” but rather an arrangement under which the firms collaborate with mutual customers to link their software products. The relationship is built upon a software integration project that the two companies carried out for Serono and disclosed late last year [BioInform 12-15-06].
“We’ve worked together with Genedata to make our software export data in a format that their software can read in,” Collins said. The HCS workflow also works the other way, via a web-based link from Screener back to the image-based information in the Cellomics software, he added.
Underlying the integration capability is a new data standard that Cellomics and several collaborators are developing called MIAHA, short for Minimum Information for a High-Content Assay. The proposed standard, modeled after MIAME and its multitude of “minimum information” offspring in the bioinformatics community, combines two existing efforts: MIACA (Minimum Information About a Cellular Assay) and the Open Microscopy Environment (OME) data model.
“The OME standard is really good at describing optical stuff and things to do with microscopes and images, but it doesn’t really have enough richness to describe the kind of data you get from a cellular assay,” he said. “Whereas the MIACA standard has nothing about microscopy in it.”
Cellomics and its collaborators, including researchers at the German Cancer Research Center, merged the two into a single standard tailored to the requirements of high-content screening. Cellomics will host a special interest group meeting at SBS next week to further discuss the proposed MIAHA standard.
Collins said that the company was able to use MIAHA to create a “round trip” that starts with the Cellomics HCS software, exports that data to Screener for analysis, “and then [is] able to reach back to look at the images, which is what everybody wants to be able to do.”
Kurt Zingler, head of US business for Genedata, said that the ability to link back to raw cellular images is one thing that distinguishes Screener’s HCS capabilities from its traditional HTS features.
In a biochemical screen, he said, the information on the plate or well “is really just a number and there’s nothing to look at.” In a high-content assay, on the other hand, “the scientist at this point usually wants to go back and look … at the image or the set of images that was used to make that call.”
Zingler said that Screener also includes improved capabilities for multiparametric analysis to meet the demands of the HCS market.
“Our understanding is that people are putting things together and finding the problems with them and starting to move toward systems that are more adept at handling multiparametric data,” he said.
“In a typical primary assay, you may measure one value in a million compounds, but in a high-content assay, you may have 50,000 or 100,000 siRNAs or compounds that you’re testing, but you may have five to 25 parameters that you’re testing,” he said.
“Generally what people have done with high-content assays is treat them as a standard high-throughput assay,” he noted. Using nuclear localization as an example, he said, “they’ll pick one number out of those 20 different parameters and say, ‘This is our measure of nuclear localization and what we’ll use to determine a hit or to determine activation.’”
However, Zingler said, the HCS field is moving toward an understanding that it needs to take all of those parameters into account and determine which combination of parameters is most informative. “Instead of just grabbing the one thing, we’ll actually rank the 20 parameters that they may be measuring and say which of these parameters is most relevant for identifying the positive reaction or the looked-for reaction,” he said.
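The parameter-ranking idea Zingler describes can be illustrated with a minimal sketch. This is not Genedata’s actual algorithm; it simply scores each hypothetical assay parameter by how well it separates positive from negative control wells (using a standardized mean difference) and ranks the parameters accordingly. All parameter names and values below are invented for illustration.

```python
# Minimal illustrative sketch (not Genedata's method): rank the parameters
# measured in a multiparametric high-content assay by how well each one
# separates positive from negative control wells.
from statistics import mean, stdev

def rank_parameters(pos_controls, neg_controls):
    """pos_controls / neg_controls: dicts mapping a parameter name to a
    list of per-well measurements. Returns parameter names sorted with
    the most discriminating parameter first."""
    scores = {}
    for param in pos_controls:
        pos, neg = pos_controls[param], neg_controls[param]
        spread = stdev(pos + neg) or 1e-9  # guard against zero spread
        # Standardized mean difference between the two control groups
        scores[param] = abs(mean(pos) - mean(neg)) / spread
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical nuclear-localization readouts, for illustration only
pos = {"nuc_intensity": [9.1, 8.8, 9.4], "cell_count": [210, 205, 198]}
neg = {"nuc_intensity": [2.0, 2.3, 1.9], "cell_count": [204, 211, 201]}
print(rank_parameters(pos, neg))  # nuc_intensity separates controls best
```

In this toy example, nuclear intensity differs sharply between the control groups while cell count does not, so the ranking surfaces the more informative readout, mirroring the shift from single-number hit calls to multiparametric analysis described above.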
Zingler compared the approach to that used in microarray analysis, where researchers would prefer to discover single-gene or single-protein biomarkers that indicate whether a patient has a disease, “but what they’re really getting out is four or five things that together really give you a picture.”
Genedata is “doing the same in the high-content world to say, ‘Let’s make use of all those parameters and figure out what combination of those parameters actually gives us the best answer and the most predictability,’” he said.