
Harriet Feilotter, Director, Queen's University Microarray Facility



PhD in molecular biology, 1990, Queen's University, Kingston, Ontario.

Studied cell cycle control and human molecular genetics, including one year in Paul Nurse's lab at Oxford.

Completed a postdoctoral fellowship and served as a principal investigator at Cold Spring Harbor Laboratory, studying the genetics of manic depression.

Directs the DNA diagnostic laboratory at Kingston General Hospital as well as the Queen's University Microarray Facility.

Q: How is the facility set up within Queen's University?

A: The facility runs out of the Department of Pathology, and is set up to provide services to any basic and clinical researchers within Queen's, as well as other users. Basically, I have set up and optimized the procedures for the production of labeled cDNAs from RNA populations given to me by researchers. I do the hybridization, scanning, and preliminary analysis. The extent of analysis is dependent on each individual researcher and how much help they require after receiving their data. Because we don't have a bioinformatics component completely in place yet, I am trying to recruit from other departments and resources around this area to deal with the bottleneck of data analysis.

Q: What types of chips/arrays do you use? Do you make your own arrays or do you use pre-fabricated arrays?

A: We currently use genome-wide cDNA chips that are manufactured by others, largely the Ontario Cancer Institute lab in Toronto. We are just starting to include oligonucleotide-based chips in our repertoire. Within the next year, we are planning to expand to be able to make arrays ourselves.

Q: What methods do you use to analyze microarray data?

A: After perusing the literature and following the current debates on the best way to analyze microarray data, I have come to the conclusion that there is no right way. Any method that one applies to microarray data will be subject to some major criticism.

Since I am not a biostatistician, and since I have limited access to bioinformatics people at this time, I have worked out a simple procedure based on a combination of expert suggestions. I carry out a normalization by subarray, followed by ranking of the fold changes in expression. I give these data to researchers, and spend time explaining that the only way to gain meaningful information is to replicate experiments, and then to chase down potentially interesting interactions via another experimental system. In other words, at this time we are using microarrays as a tool to guide hypothesis formation, which is then tested by another means. Although the potential use of microarrays is much greater than that, I do not feel comfortable going beyond this level of interpretation until chip printing, target labeling, hybridization, and spot scanning parameters are better controlled than they are today.
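The procedure described above — per-subarray normalization followed by ranking of fold changes — can be sketched in a few lines of Python. This is a minimal illustration only: the spot layout, dictionary keys, two-channel (Cy3/Cy5) setup, and the choice of median-centering within each subarray are all assumptions for the example, not the facility's documented protocol.

```python
# Illustrative sketch: normalize log ratios by subarray (print-tip
# group), then rank genes by magnitude of fold change. All field names
# and the median-centering choice are assumptions, not the exact method.
import math
import statistics
from collections import defaultdict

def normalize_by_subarray(spots):
    """Median-center log2(Cy5/Cy3) ratios within each subarray,
    so each block of the slide is centered at zero.

    `spots`: list of dicts with keys gene, subarray, cy3, cy5.
    Returns a list of (gene, normalized log2 ratio) tuples.
    """
    by_sub = defaultdict(list)
    for s in spots:
        ratio = math.log2(s["cy5"] / s["cy3"])
        by_sub[s["subarray"]].append((s["gene"], ratio))

    normalized = []
    for pairs in by_sub.values():
        # Subtract the subarray's median ratio from every spot in it.
        median = statistics.median(r for _, r in pairs)
        normalized.extend((gene, r - median) for gene, r in pairs)
    return normalized

def rank_by_fold_change(normalized):
    """Order genes by magnitude of normalized log2 fold change."""
    return sorted(normalized, key=lambda gr: abs(gr[1]), reverse=True)
```

A quick usage sketch, with made-up intensities:

```python
spots = [
    {"gene": "g1", "subarray": 1, "cy3": 100, "cy5": 400},
    {"gene": "g2", "subarray": 1, "cy3": 200, "cy5": 200},
    {"gene": "g3", "subarray": 2, "cy3": 150, "cy5": 150},
]
ranked = rank_by_fold_change(normalize_by_subarray(spots))
```

The top of the ranked list is then a candidate set for follow-up by another experimental method, as described above, rather than a final answer in itself.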

Q: What is the biggest challenge you face in working with microarrays?

A: The issue of reproducibility of the raw data is probably the single biggest factor that concerns me working in this field. Chip-to-chip and experiment-to-experiment variation are still quite large, and it is likely that this situation will not be resolved until there is better standardization and control at the level of printing chips, and at the level of hybridization and scanning of the arrays. While analysis remains a problem, I don't see it as the same type of challenge because good solid raw data from a scanned chip can be subjected to all types of analysis. But if the raw data themselves are suspect (i.e., because of uneven spots, difficulty in calculating the average intensity of any given spot, uneven incorporation of dyes into the target DNA populations, etc.), then all the analysis in the world won't correct that problem and you might as well go home.

Q: If you could make out a wish list for microarray technology advances or improvements over the next couple of years, what would it be?

A: First, I would like standardized printing of slides, with small, even spots and no across-spot variability (or at least less variability). Second, good internal methods of controlling each individual chip; for instance, quadruplicate spots for genes in different areas of the slide. Third, elimination of the requirement to incorporate bulky fluorescent molecules into the target DNA (i.e., scanning by measuring some parameter that is not optically based). And lastly, the ability to strip and reuse chips so that replicate experiments could be done on exactly the same chip.
