U of Rochester's Brooks Reports Back on Annual ABRF Conference


Several sessions at this week's meeting of the Association of Biomolecular Resource Facilities in Savannah, Ga., focused on the remaining challenges and new developments in the use of microarrays. On Sunday, Andy Brooks, director of the Functional Genomics Center at the University of Rochester Medical Center, chaired a roundtable discussion on microarray platform compatibility where five industry representatives answered questions from an academic panel. He also helped conduct the Microarray Research Group's 4th annual survey of microarray facilities, as well as a comparison of commercially available kits for small-sample amplification, results from which were presented on Monday afternoon. BioArray News spoke with Brooks this week.

What was the purpose of the roundtable discussion, and who did you invite?

The roundtable idea [came] from the ABRF executive board. Given that a lot of their research groups [deal] with gene expression, one of the challenges is being able to compare data across platforms, [and] they wanted to have a roundtable to discuss those issues. [We assembled] an academic panel to generate questions [ahead of time for] an industrial panel, which comprised the heads of informatics for GE Healthcare, Agilent, NimbleGen, Applied Biosystems, and Affymetrix. The academic panel then asked the questions at the roundtable, and the audience was able to comment.

What are the greatest challenges in comparing platforms?

[What] one manufacturer says you are measuring vs. the other with respect to annotation is the largest challenge, because gene X doesn't necessarily mean gene X with respect to annotation. That was the big challenge, and it was the underlying theme.

The sensitivity of platforms, and how these measurements are being made, is also important.

What were the most interesting answers from the industry panelists?

All the industrial panelists echoed the same thing: while the annotation responsibility falls on the experimentalist now, collectively generating a database that describes differences in annotation between different probe sets, and the probes that are selected to represent those genes, could be a good resource.

Everybody agreed that would be an excellent resource for the community.

The suggestion is that they would all be willing to participate in the development of a central resource database that would look at annotation information for the respective platforms.

The academic panel asked the industrial panel [if it is] possible to develop a weighting metric [that] would allow for more successful comparisons across platforms.

The collective answer was that you don't need a weighting metric. What you really need is better annotation information to make sure that either platform is querying the same thing. RefSeq is incomplete, the annotation of the genome is incomplete, and we need to have better annotation of the information that exists.
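The panel's answer lends itself to a simple illustration: instead of weighting signals, a cross-platform comparison can be restricted to probes that both platforms map to the same current annotation. Below is a minimal sketch in Python; the probe IDs and mapping tables are hypothetical stand-ins for vendor annotation files keyed to a common RefSeq build, not any vendor's actual data.

```python
# Sketch: compare two platforms only where both query the same annotated gene.
# The mapping tables below are invented; in practice they would come from each
# vendor's re-annotation files tied to the same RefSeq release.

platform_a = {"A_probe_1": "NM_000546", "A_probe_2": "NM_004333"}  # probe -> RefSeq
platform_b = {"B_probe_9": "NM_000546", "B_probe_7": "NM_001904"}

def by_gene(mapping):
    """Invert a probe->RefSeq mapping into RefSeq -> list of probes."""
    genes = {}
    for probe, refseq in mapping.items():
        genes.setdefault(refseq, []).append(probe)
    return genes

genes_a, genes_b = by_gene(platform_a), by_gene(platform_b)

# Only genes annotated on both platforms are directly comparable.
comparable = sorted(set(genes_a) & set(genes_b))
print(comparable)  # ['NM_000546'] -- the one gene both platforms actually query
```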

[They also asked for] advice on how to compare new protocols using the same array platforms, and what level of correlation or overlap is acceptable when referring back to historical data for a new product or protocol.

The collective answer from industry to a number of different respondents was that historical data is something that everybody has to deal with as new protocols [are released].

They don't see historical data, quite honestly, as being an issue. If these protocols are being developed and improved to allow for small sample input, like with NuGen Technologies, or to use other sample types like LCM [laser capture microdissection], it's only going to expand understanding of the data, not hinder us. Comparability is really in the eye of the biological beholder, comparing at a biological level.

They also asked for the panel's opinion on how sensitivity may affect data interpretation and cross-platform comparisons.

The panel said that you walk the line between sensitivity and specificity. A platform that's more specific is going to sacrifice sensitivity; a platform that's more sensitive may or may not be less specific.
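That tradeoff is easy to make concrete. The minimal sketch below uses invented spike-in counts (all numbers are hypothetical, not from the panel or the survey) to show how moving a detection threshold trades sensitivity against specificity:

```python
# Sketch: sensitivity vs. specificity at two hypothetical detection thresholds.
# Counts are invented spike-in results for a "present/absent" call:
# tp = true positives, fn = false negatives, tn = true negatives, fp = false positives.

def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # fraction of truly present transcripts detected
    specificity = tn / (tn + fp)  # fraction of absent transcripts correctly rejected
    return sensitivity, specificity

# Lenient threshold: catches more real signal, but admits more noise.
print(sens_spec(tp=95, fn=5, tn=80, fp=20))  # (0.95, 0.80)

# Stringent threshold: cleaner calls, but more real transcripts are missed.
print(sens_spec(tp=75, fn=25, tn=98, fp=2))  # (0.75, 0.98)
```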

Was there any discussion of splice-variant arrays?

We talked a lot about splice-variant arrays and other designs, and everybody felt that as the regions for detection become more restricted, as they are with splice variants and tiling arrays, the data across platforms is going to be more comparable, because there's less room for interpretation as to where to put your probe sets.

Tell me about this year’s annual survey of microarray facilities — how did it differ from last year’s?

What we added this year was a detailed section on bioinformatics tools, on future directions and emerging technologies, and also on protein arrays.

How was this year’s response?

This year we had over double the number of respondents of any previous year. We had well over 200 respondents, which means in my mind that the number of facilities and people doing this work is growing. Another big change is that greater than 75 percent of these facilities are full-service facilities. These labs are actually doing the work from start to finish, and over 50 percent of them offer some kind of education program.

What are the main trends or changes compared to last year’s survey?

People are using different kinds of informatics tools. They are taking that more seriously; they are educating themselves in informatics to understand their data. They used packages before, but they are using different packages [now]. Everybody is getting away from using Affymetrix's [software] in analysis. If you looked at the biggest challenge of any of these facilities, it's bioinformatics.

The second biggest challenge in this community is still funding, getting the money to do the kinds of experiments that you need to do.

What did you find out about how people use protein arrays?

People mostly are using antigen arrays; those are homemade. The second biggest are antibody arrays, [which] are split between homemade and custom arrays.

What about the new bioinformatics section?

[We asked] what products [people] use, who analyzes the data, whether it's a statistician or the individual PIs, and whether people keep their data in a database or keep it locally. We also wanted to know what percentage of people that do these kinds of experiments are submitting their data to public databases. Only 35 percent of people doing microarray experiments are submitting them to public data sources.

Also, we found that software that people are putting up for free is used as frequently as commercial software packages. You would think that with all the different commercial packages out there people would rally around them, but a lot of people are still using shareware packages, because they find them to be more intuitive or because they come out more quickly.

How about the new section about future technologies?

The future microarray technologies [section] is broken down into a number of different areas. We were looking at novel array applications, and we asked questions about solution-based arrays, CGH [comparative genomic hybridization] arrays, CpG island arrays, and splice-variant arrays, and got very positive feedback about interest in all of these different array types. We also talked about new array technologies: we asked questions about three-dimensional surfaces, array-on-array technologies, and also SNP arrays. The last section was focused on clinical applications, whether or not laboratories are willing or prepared to run clinical samples or diagnostics in their labs. We got a lot of feedback on that.

Can you elaborate on the feedback you received?

The responses were quite staggering. Basically half of the microarray field and half of the gene expression field are positioning themselves to turn their laboratories into clinical diagnostic labs.

We had 108 respondents to that part of the survey; 108 core facilities. Fifty percent [said that they] want to be prepared to run clinical samples. Over 70 percent of them have no knowledge of the government programs, such as those from the National Institute of Standards and Technology, that are going to be critical for their ability to run platforms. Sixty percent said they wouldn't outsource this technology if it was a clinical reality; they want to run these things in house. Fifty-one percent of laboratories that responded have already begun reviewing and interpreting what it would take to make their labs compatible for running these kinds of assays.

Were there any dramatic changes or surprises compared to last year’s survey?

Just the fact that some of the larger facilities are getting larger. The smaller facilities, I think, are [growing]. The facilities that are in existence are doing more different things and really expanding the technology base. I think a lot of people are looking for new technology and new applications, which I think is good. I think the market is growing.

Did you see a trend away from certain technologies?

A little bit from custom arrays; that market is getting a little bit smaller. The number of respondents was smaller.

Tell me about the small sample amplification study — what did you compare, and whose product fared best?

We looked at a lot of commercially available small [sample] amplification kits. We compared Ambion's two-round amplification kit with [kits from] Affymetrix, NuGen Technologies, Arcturus, and Enzo. We wanted to look at small-sample amplification: things that we can do with 20 nanograms or less of starting material. We did some detailed experiments that looked at the reproducibility, the sensitivity, the accuracy, and the functional or biological relationship of how these different kits perform, how easy they are to run, what the costs are, what the time savings are, and what the results look like.

We found that all the small-sample chemistries that were tested showed good to excellent reproducibility within each technology. When [used] appropriately, the protocols are very robust in their own right. However, we found that there are significant differences in sensitivity between small-sample amplification [kits].
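The two properties the study separates, within-kit reproducibility and sensitivity, are commonly quantified along the following lines. This is a minimal sketch with invented replicate intensities and an invented detection cutoff, not the study's actual analysis:

```python
import numpy as np

# Sketch: two invented technical replicates of log2 intensities from one kit.
rep1 = np.array([8.1, 5.3, 10.2, 7.7, 3.9, 6.5])
rep2 = np.array([8.0, 5.6, 10.0, 7.9, 4.1, 6.4])

# Within-kit reproducibility: correlation between replicate arrays.
r = np.corrcoef(rep1, rep2)[0, 1]
print(f"replicate correlation: {r:.3f}")  # near 1.0 for a robust protocol

# Sensitivity: fraction of spiked-in transcripts called present above a cutoff.
spike_intensities = np.array([4.2, 6.8, 9.1, 3.1])  # invented spike-in signals
threshold = 4.0                                     # invented detection cutoff
print(f"fraction detected: {np.mean(spike_intensities > threshold):.2f}")  # 0.75
```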

Enzo, in two rounds of amplification, was the least sensitive, and NuGen's Biotin protocol for Affy arrays was the most sensitive. We found that the cDNA hybridization, in terms of NuGen, measured the largest number of differentially expressed genes. All of those were validated when compared to other small-sample technologies.

We also found that there were differences in protocol workflow and ease of use for technicians using the protocols for the first time.
