
U of Edinburgh's Campbell Promotes Use of Arrays in Routine Blood Testing

Name: Colin Campbell

Title: Research Fellow, University of Edinburgh, Edinburgh, UK

Professional Background: 2006 — present, research fellow, University of Edinburgh; 2003 — 2005, project leader, Scottish Center for Genomic Technology and Informatics at University of Edinburgh; 2001 — 2003, group leader, NTera Ltd, Dublin; 2000 — 2001, research associate, Imperial College, London; 1998 — 2000, research associate, University of Connecticut; 1997 — 1998, experimental scientist, Zeneca FCMO.

Education: 1997 — PhD, Coventry University, Coventry, UK; 1993 — BSc, chemistry, University of Edinburgh.

Colin Campbell is a research fellow at the University of Edinburgh who can often be found working with arrays at the Scottish Center for Genomic Technology and Informatics. He also turns up on occasion at conferences to give the community an update on what SCGTI is working on.

At Cambridge Healthtech Institute's PepTalk conference in San Diego in January, Campbell gave attendees a taste of the work SCGTI has been doing with the Scottish National Blood Transfusion Service to use the multiplexing capabilities of protein array technology to conduct routine blood screening in one assay, as opposed to current methods, which rely on many individual tests.

Campbell and colleagues capped their efforts with SNBTS by publishing a paper last month in Analytical Chemistry detailing the creation of their optimized cell interaction microarray for blood phenotyping [Cell interaction microarray for blood phenotyping. Anal Chem. 2006 Mar 15;78(6):1930-8]. To learn more about the paper and the work in general, BioArray News interviewed Campbell this week.

In the paper, you say that, in the short term, protein arrays have a greater potential to impact diagnostics than DNA arrays. Can you give a few examples of why you believe this to be true?

Protein biomarkers are currently more widely established than, for example, RNA. Many more clinical assays measure protein levels than nucleic acids. In fact, there have been some protein microarray products available for clinical testing for several years, [like those available from] Randox and Biosite.

Then why doesn't the technology seem to be getting acknowledged in the marketplace?

It is probably because there isn't much of a market for DNA microarrays in clinical testing at the moment. I think the Roche AmpliChip was the first [DNA array] to get cleared [by a regulatory agency] anywhere for a standard clinical test. And that only happened last year. Whereas things like the Biosite chip for cardiac markers and the Randox system have been around for four or five years. People already test in clinical labs for protein biomarkers, but not for transcripts. There is some genotyping, and that's where the Roche AmpliChip comes in. But I don't think it's a big area.

You also mention some of the problems protein microarrays suffer from, such as generation and production of protein-based reagents. In your opinion, what's the objective for the technology and what are the challenges?

If what you want is an array of different proteins, then for most applications you want them to be functionally active. If you are making a protein array for measuring proteins — sort of a protein analytical array — then most of those proteins are going to be antibodies or antigens. And you'll want them to be correctly folded and functional and all of that stuff.

That's not a problem that you have with DNA microarrays. You can design a probe and you don't have to worry about it unfolding because the molecular recognition is purely based on its primary molecular structure. And the big problem with protein microarrays is that the molecular recognition is based on secondary and tertiary interactions. So if the protein has been put on the surface and it gets unfolded or it gets bound in the wrong alignment, like it's upside down with regard to its binding site, then it just screws everything up.

The challenge is to make protein reagents that are stable to attachment and that can be attached in a controlled alignment. Those are really the two big things: that the reagents you attach don't unfold, and that you can somehow attach them in the right alignment.

It seems that there is a drive towards engineered protein array probes. These might take the form of nucleic acid aptamers, engineered antibody fragments, or even peptide aptamers. All of these efforts are moving in that direction.

Is it a priority?

I think it is one of the biggest areas to be overcome. Especially taken in context with the question of density, because the selection of antibodies for arrays is just not trivial and maybe only 25 percent of the antibodies that you put in are going to be suitable. So really, you'd like to have some type of rational way of screening out the antibodies that you want rather than just doing it by getting a big library of antibodies and screening through them to find out which ones actually work on a surface. Because what we've found is that some which work in a solution-phase assay don't work that well on a surface. It's not well-understood why that happens.

[But] the need for density depends entirely on application. There isn't a need to measure thousands of proteins — or hundreds — for any clinical application. However, if protein arrays are to complement DNA arrays in applications where global profiling of a proteome is required, the density becomes much more important. The density [question] is related to the previous point on reagents, since availability of specific antibodies, or other probes, for such large groups of targets is much harder than generation of large libraries of oligos.

Perhaps you can describe how you evaluated different surface chemistries and what some of the results of that evaluation were. I am also interested in knowing why you went with gold-coated slides.

We chose a group of surface chemistries which we thought were representative, for example containing physisorption, covalent attachment, hydrogels, and a metal-coated surface. We spotted them with the same group of antibodies, carried out a binding assay and chose on the basis of both the best signal-to-noise characteristics and spot morphology.

Surprisingly, gold proved to be the best. It seems that there is some surface-enhanced fluorescence with a gold slide. There have been similar observations with other metallic slides and also with some on which an optical grating has been patterned; however, the mechanism of enhancement is probably not the same in all cases.
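As a rough illustration of the selection criterion Campbell describes — ranking surface chemistries by signal-to-noise — here is a minimal sketch in Python. The intensity values and the two surface names are invented for illustration; they are not data from the study, and the group's actual scoring also weighed spot morphology.

```python
import statistics

def spot_snr(spot_pixels, background_pixels):
    """Signal-to-noise ratio of one array spot: background-corrected
    mean spot intensity divided by the background standard deviation."""
    signal = statistics.mean(spot_pixels) - statistics.mean(background_pixels)
    noise = statistics.stdev(background_pixels)
    return signal / noise

# Hypothetical pixel intensities for the same antibody spotted
# on two candidate chemistries (values invented for illustration)
gold_snr = spot_snr([950, 980, 1010], [110, 120, 100, 115])
hydrogel_snr = spot_snr([400, 420, 390], [150, 160, 170, 140])

# Pick the chemistry with the better signal-to-noise characteristics
best = "gold" if gold_snr > hydrogel_snr else "hydrogel"
```

With these invented numbers the gold surface comes out well ahead, mirroring the qualitative result described above.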

How did the project with SNBTS come about?

For a little while they had labs which were adjacent to our labs. They wanted to look at ways of blood typing using microarrays, and not only blood typing, but taking all the tests currently done on a blood donation and putting them onto an array. So not just blood typing, but there is the potential with protein arrays to look for blood-borne pathogens as well.

Where did you get the blood from?

We obtained blood from SNBTS. SNBTS have been a partner throughout the project and are a market leader in supply of reagents for blood typing. In the course of the project we probably printed 2,000 protein arrays, always using our in-house spotter.

What were some of the experiments you ran on the antibody array platform?

We carried out typing of the major blood groups and through a process of performance optimization got good enough signal-to-noise characteristics that we could rely on the intrinsic fluorescence of the cells as a readout method.

The specific experiments that we ran were to first of all get the surface chemistry right, and then to screen through a library of antibodies — which SNBTS produced, and which are well-characterized in solution but hadn't been characterized on a surface — and find out which of those antibodies give the best array performance. And finally we had to screen several factors — like, what was the best blocking solution to use, what were the best incubation conditions, whether you want to have low saline phosphate or standard phosphate buffered saline — to try and figure out what gave the best assay performance. We also wanted to figure out the best way to do the scanning, and how to get the best results through our scanning protocol.

Eventually when we had very high signal-to-noise ratios we figured out that we could do all of this detection without any fluorescent dye in the red cells at all, so doing it in a label-free manner. But we only managed to get to that by doing all of the other experiments which gave us very high signal-to-noise.

How did you present your results to SNBTS? Have they given you any hint on where they intend to take this next?

We have had weekly meetings with them all throughout the project and they've really been a collaborative partner in the research. The commercialization of it for transfusion medicine is really in their hands. So I think they are interested in working more on it.

Do you think protein arrays are robust enough to support widespread usage by an agency such as SNBTS?

I think there's a bit of development to do in order to achieve the standards that an agency such as SNBTS requires, but I also think it's possible. There are challenges in automation of both the experimental procedures and the data analysis and there is very little margin for false results in donor testing.

It comes down to a couple of things. DNA arrays are really good for high-information assays. You get an awful lot of information out of them, but they are not exactly high-throughput; you don't put through a lot of samples. But with protein assays for clinical use you want less information, because you may only be measuring 10 markers, but you want to be able to measure hundreds of samples per day. And that's a new area for arrays.

Automation is the big challenge — to do it again and again and get it processed in a high-throughput manner. There's a whole other challenge, which is analyzing the data. You want that to be fully automated; you don't want it done on a user-intervention basis. So far we've done it on a user-intervention basis, using QuantArray to analyze the spots, but we've developed some software to be able to analyze those things much faster.
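In outline, the fully automated analysis step Campbell describes would reduce each quantified spot to a positive or negative call without user intervention. The sketch below is a hypothetical simplification — a single intensity threshold, with invented antibody names and intensities — not the group's actual software, which would also need quality checks on spot morphology and replicates.

```python
def call_phenotype(spot_intensities, threshold):
    """Automated pass over quantified array spots: call each antibody
    spot positive or negative against a fixed intensity threshold,
    with no per-spot user intervention."""
    return {name: ("positive" if value >= threshold else "negative")
            for name, value in spot_intensities.items()}

# Invented background-corrected intensities for three typing antibodies
calls = call_phenotype({"anti-A": 8200, "anti-B": 310, "anti-D": 9100},
                       threshold=1000)
```

A real pipeline would derive the threshold from control spots on each slide rather than hard-coding it, but the shape of the automation — quantify, then call, in batch — is the same.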

Will it lead to future publications?

I think there will be some publications on algorithms used in data analysis in a high-throughput way and also extension of this method to other cell types.
