
Mark Duncan on Collaborative Proteomics with Clinicians Using a Core


At A Glance

Name: Mark Duncan

Position: Professor of medicine, department of pediatrics, endocrinology, diabetes, and metabolism division, University of Colorado Health Sciences Center, since 2004.

Director, University of Colorado Cancer Center Proteomics Core, UCHSC, since 2004.

Professor of biochemistry and molecular genetics, UCHSC, since 2000.

Background: Director, Biochemical Mass Spectrometry Facility, UCHSC, 1999-March 2004.

Director, Biomedical Mass Spectrometry Facility, University of New South Wales, Sydney, Australia, 1993-99.

Director, Biomedical Mass Spectrometry Unit, UNSW, 1990-93.

Post-doc in neuroscience and mass spectrometry, National Institute of Neurological Disorders and Stroke, NIH, 1987-90.

Post-doc in neurochemistry and mass spec, Garvan Institute, Sydney, 1987.

PhD in neuroscience and mass spectrometry, UNSW, 1987.

BS in organic chemistry, UNSW, 1979.


How did you first get involved with proteomics?

I’d spent many years applying mass spectrometry in a clinical setting in Australia. But until the advent of techniques like secondary ionization mass spectrometry, MALDI, and electrospray, I was restricted to looking at low molecular-weight compounds. So it was both exciting and biologically relevant to progress to high molecular-weight compounds when the techniques became available.

How did you get involved with the University of Colorado core?

I did a post-doc at the NIH, where I’d worked with Sandy Markey and Irv Kopin, and after leaving there I returned to Australia to take up an academic appointment, and I built a mass spectrometry facility there. Then I was recruited from there to come to sunny Denver and develop a proteomics facility of the same type to meet the needs of the community here.

Tell me about the recent changes at the facility.

We were originally located in the School of Pharmacy. As of early this year, we’ve moved to the School of Medicine. I view that as a very positive move, because my clinical colleagues are an important component of the proteomics exercise. Proteomics is a potentially powerful tool, but its full potential is only realized when practitioners work hand-in-hand with clinical investigators. Now, as a faculty member in medicine, I have a joint appointment as a professor in endocrinology and pediatrics. We have very close ties to our clinical colleagues, and a much better opportunity to tie in with their clinical expertise, design the studies, and obtain the samples we need to realize the promise that proteomics has. The move increases our interactions with our clinical colleagues; we now have several clinicians training in our lab, and these interactions will undoubtedly increase.

The facility has not changed in terms of the hardware we have available, but it has changed in terms of its mission. The primary change is that I now have an academic appointment, and the freedom and opportunity to pursue scientific problems as an investigator, while spending 10 percent of my time directing a proteomics resource that meets the needs of the users on the UCHSC campus. So 90 percent of my time I do my own research, which clearly centers on proteomics, and 10 percent of the time I direct a resource that we’re building up to meet the needs of the wider community.

We have two [Thermo Electron] LCQs, two Applied Biosystems MALDI-TOF instruments, a Q-STAR, and a range of the Amersham — now GE — proteomics workstation capabilities for 2D gel work. We do Amersham 2D DIGE routinely.

What services do you provide, and who generally uses them?

I think the trouble is that proteomics is more complicated and non-routine than most of our colleagues appreciate. So a core that simply ‘does proteomics’ is exceedingly difficult to deliver, and the expectations of the users are frequently unrealistic. They’ve seen that genomics is relatively automated and routine, and they expect the same of proteomics.

What we try to do is work collaboratively with our colleagues to provide them with information from the beginning — from the study design, to data at the end that they can interpret. So rather than a fee-for-service capability, we’ve tried to offer a collaborative research environment, where they can come to us and discuss their needs and we can play a part in designing the study from beginning to end. That’s not what some of them expect — they don’t want you intimately involved in the experiment, and seem to believe they don’t need you. But they usually do. Most people don’t have a good understanding of what proteomics can and can’t offer, and they don’t appreciate the subtleties of experimental design, data generation, and data review — the intrinsic steps in the process.

Can you give me an example of how this collaborative process works?

It generally begins with someone coming to my office and saying, ‘we work in, for example, cystic fibrosis, and we’re interested in how proteomics might impact upon our research — can we talk about that?’ In an ideal setting we would then sit down and have an hour-long meeting with that person about how we might assist them, and that really means contribute to the area of cystic fibrosis diagnosis and treatment. Then we would invite them to attend our weekly lab meetings, where they would hear other clinical investigators talk about their research — what they’re doing and how they’re doing it, and how they’re interacting with our lab. After a period of time, they would have a better appreciation of what can or can’t be done. Then we would start generating some preliminary data with them, with a view to subsequently writing a full proposal for their work that would be submitted for funding, and we would be an integral component of that submission. We tend not to take over the work from them, but to make sure that sensible experiments are designed that provide useful data at the end. If our only part in the process is to run the samples that they drop at the door, invariably that exercise is mostly meaningless.

Tell me a little about your own work.

Our objective is to do something useful, that either helps in the diagnosis or in the treatment of patients. So perhaps more so than some other proteomics facilities, there’s a very focused mission on the clinical aspects of what we do. I think we therefore do two things — we try to improve the methods that are available to proteomics, because, as I said before, they are relatively immature and there are many areas for improvement. And we try to apply what we’ve got to clinical problems. At the moment, I’d say the main areas of research in my group are cystic fibrosis, diabetes, cancer research — primarily in the areas of thyroid, prostate, and lung cancer — and some studies on cardiac disorders.

[We look to] find biomarkers of disease, prognostic markers of patient outcome, [or] approaches to sub-classification of disease based on biochemical parameters. Can we sub-classify, for example, thyroid cancer into distinct biochemical forms — thereby coming up with better therapeutic protocols to target that specific form, rather than the heavy-handed approaches that are often used? I think many of us [also] dream of identifying new drug targets and improving therapeutic strategies. In reality, that’s not going to happen all the time, but I don’t think there’s been a proteomics study we’ve done where we haven’t come up with data that helps the clinicians better understand what’s going on.

You said the technology is relatively immature — what are you working on in that area?

Several things. We’re interested in alternative samples. There’s been a tendency in clinical medicine to look at blood and plasma. We’re interested in looking at alternatives such as tears, saliva, and urine, as biological samples that might give us insight. And then on the analytical side, we’re especially interested in more precise quantitative methods. Just getting lists of what proteins are present in a sample is probably of little value in most instances. The quantitative methods have been slow to evolve and are almost invariably imprecise. It’s hard to pick up subtle changes, and so we’ve spent a lot of time on that. The other area I think is weak is algorithms for protein identification. We’ve spent a lot of time on improved algorithms for protein identification and automation for data analysis.
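The core idea behind most protein-identification algorithms of this era — matching observed peptide masses against the theoretical peptide masses predicted for each candidate protein in a database — can be illustrated with a minimal sketch. This is not the Duncan lab's actual algorithm; the tolerance, protein names, and mass lists below are all invented for illustration.

```python
# Minimal peptide-mass-fingerprint matching sketch (illustrative only).
# Candidate proteins are scored by how many observed peptide masses
# fall within a mass tolerance of any theoretical peptide mass.

TOLERANCE_DA = 0.2  # hypothetical mass tolerance, in daltons

# Hypothetical theoretical peptide masses for two candidate proteins
DATABASE = {
    "protein_A": [842.5, 1045.6, 1296.7, 1511.8],
    "protein_B": [900.4, 1102.5, 1300.9],
}

def score_protein(observed, theoretical, tol=TOLERANCE_DA):
    """Count observed masses within tol of any theoretical mass."""
    return sum(
        any(abs(m - t) <= tol for t in theoretical) for m in observed
    )

def identify(observed, database=DATABASE):
    """Rank candidate proteins by match count, best first."""
    scores = {name: score_protein(observed, masses)
              for name, masses in database.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Three observed masses match protein_A's predictions, none protein_B's
ranking = identify([842.6, 1296.6, 1511.9])
```

Real search engines of the period (e.g. Mascot, SEQUEST) refine this with probabilistic scoring and fragment-ion spectra, but the match-and-rank structure is the same.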

What are you working on in quantitation?

The technique that we’ve adopted routinely and are very positive about is the Amersham DIGE strategy for dual labeling. We routinely use that as a first pass at quantitation. But we have also been developing mass spectrometric approaches that allow us to go in and quantify individual proteins to confirm those findings. We use structural analogs, and sometimes stable isotope-labeled compounds, as reference standards for that quantification. But during clinical studies, you can’t really go in and do some of the things you do in cell culture. We don’t have the opportunity to feed deuterated water to our patients or give them 13C-labeled amino acids. You have to be more innovative about how this is done, and the key is to improve precision. To be able to measure something with a coefficient of variation of 30 percent is not good enough in most [clinical] settings. The bottom line is, we’re very happy with the DIGE technology, but we’re always looking for other approaches to confirm those findings. [These are] label-free in the sense that we don’t label the organism, but we might incorporate labels at the end in the analytical strategy.
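The precision point above is simple arithmetic: the coefficient of variation (CV) is the standard deviation of replicate measurements divided by their mean, and a 30 percent CV swamps the subtle (say, 1.3-fold) changes that clinical studies often need to detect. The sketch below, with invented replicate values, computes the CV and shows ratio-based quantitation against a spiked internal standard, which cancels much of the run-to-run variability because analyte and standard are measured in the same spectrum.

```python
# Back-of-the-envelope sketch of CV and internal-standard quantitation.
# All numbers are invented for illustration.

import statistics

def cv(values):
    """Coefficient of variation: sample standard deviation / mean."""
    return statistics.stdev(values) / statistics.mean(values)

# Replicate peak areas for one protein (hypothetical)
noisy_replicates = [100.0, 70.0, 130.0, 95.0, 150.0]   # imprecise assay, CV ~29%
tight_replicates = [100.0, 97.0, 103.0, 99.0, 101.0]   # precise assay, CV ~2%

def ratio_quant(analyte_signal, standard_signal, standard_amount):
    """Estimate analyte amount from its signal ratio to a spiked
    reference standard of known amount measured in the same run."""
    return standard_amount * analyte_signal / standard_signal
```

With the noisy assay, a true 1.3-fold difference between patient groups is well inside the measurement scatter, which is the practical argument for confirming DIGE findings with a more precise targeted method.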
