As a professor of medical history and ethics at the University of Washington, Wylie Burke studies the ethical implications of genomic information and research. Soon after she gave a talk at the annual HUGO meeting, Genome Technology’s Ciara Curtin caught up with her to discuss the ethical issues that crop up in clinical genomics. What follows are excerpts of the conversation, edited for space.
Genome Technology: In your talk at the Human Genome Meeting, you spoke about genetic testing. What are the issues in testing people for genetic variation, especially when a gene can play multiple roles?
Wylie Burke: My talk was specifically about pharmacogenetic testing. That’s the situation where we do genetic tests for the very specific purpose of guiding better use of drugs. We already have a clear idea about what the purpose of the test is, but we need to recognize — and that was the point I was really making — that as we get the information we’re seeking, we may also get other information that we don’t want or, more to the point, we may get other information and then have to decide whether we want it or not — whether it’s an added benefit of the test or a harm. We’re really just starting to think about that issue. We’re just at the early stages of trying to define, from a policy perspective, how you would make that determination. Number one, is there added information? And, number two, does it have clinical significance, either as a benefit or as a harm?
GT: Do researchers have an obligation to tell study participants if they have an increased risk for a disease?
Burke: If the research study generates information that has the potential to be significant from a health perspective to a patient, then there is, from an ethical perspective, an obligation. So how do you determine that? There have been a couple of panels that have deliberated on that point, and the general conclusion is that you are looking for three properties. The first is that the genetic test is a valid measure of risk. That is, it’s been well established; it’s been duplicated in a couple of different populations. The second is that the risk has significant implications for the person’s health. The third is that there are things the person can do in order to improve their health outcome.
So the example would be, you’re doing research and in the process of your research, you discover a marker that identifies someone as having an increased risk for colon cancer and based on this risk knowledge, the person would benefit by earlier or more aggressive colon cancer screening than would be offered to the general population. I think when you have genetic information of that sort, you can make a pretty strong argument that the researcher has an obligation to disclose that information to the participant.
GT: Does this obligation include past study participants? What is the duration of that obligation?
Burke: I think you have to weigh the benefits to the individual against the burden to both the researcher and, more generally, society. From a practical perspective, it doesn’t seem realistic to say that the researcher has a never-ending obligation. Say we’re doing research now in 2007 and realize, based on this research, that research we did in 1990 identified people with a significant health risk. Most of the time, it would be unrealistic to expect the researcher to carry the obligation that long. It would certainly be important to look at context. For example, maybe the researcher is working with the same cohort over a 20-year period; then you could imagine the obligation being more significant. On the other hand, if a study was completed 17 years ago and they’ve had no further contact with the research participants, it’s hard to argue that the obligation is still there.
GT: There’s something you mentioned in your talk about testing for the apoE gene and Alzheimer’s disease. Could you elaborate on that?
Burke: ApoE is an interesting gene. Variants in that gene are associated with risk of atherosclerotic heart disease. People started studying this gene very much from the perspective of cardiovascular disease risk. Then they discovered that two particular genotypes in this gene — two copies of the apoE4 variant, or one copy of 4 and one copy of 3 — confer an increased risk of Alzheimer’s disease. Now, what’s really important is there’s no question that this is a validated risk. But there is nothing to do about it. If you go back to the original criteria: Is the risk validated? Yes. Does it have significance for the person’s health? Yes. Is there something to do in order to improve outcome, reduce risk? No. So it doesn’t meet those basic criteria that tell us there is an obligation to disclose. Arguably, a researcher would not have an obligation to disclose. That said, it is now such a well-established risk factor that I think you would have to tell people prospectively if that testing was part of your research study.
I would argue, in fact, that including apoE testing to identify cardiovascular risk is not appropriate because we have very good ways of measuring cardiovascular risk without using apoE genotype and, if we do that test, we generate the additional information that may be unwanted, that could be stigmatizing.
GT: How should a clinical researcher manage the expectations of and educate the patients enrolled in a study?
Burke: I think there are two prominent issues. One is the issue of therapeutic misconception. If clinical research is primarily for the purpose of adding information to the field and not for benefiting the patient, it is really important that the patient knows that. It’s also really important that if a study is going to generate genetic information, there’s a plan in place about disclosure and the informed consent process explains what that plan is. If the information is going to be clinically significant and lead to potential clinical actions, then you really can’t justify not disclosing it.
The other general issue is the dual-role issue. Clinical geneticists are often the only real expert, the only kind of clinician taking care of people with certain kinds of rare diseases. They are often based in academic centers and they might be pursuing research also. And the research is research. The obligation of the researcher is to make sure that the study is done well in order to get the answer that the resources have been invested to get. The obligation of a clinician is always to do what is best for the patient. Those two may sometimes be in conflict. Someone else should do the informed consent. The clinician should never be in the position of persuading the patient to participate in the research. It’s just mixing the roles in a way that should not happen.
The other red flags have to do with risk and adverse events. There is a particular concern to be careful and cautious if the research study involves any risky procedures — if it’s a study that involves an invasive procedure, like a liver biopsy or bronchoscopy. Someone else needs to be monitoring the safety, not the person whose own patient it is. That person should not be making the decision. It’s putting them in a role conflict. That becomes only more so if there is an adverse event. In clinical research, when there is an adverse event, sometimes a decision has to be made. “Do we pull this participant out of the study?” That should happen independent of the person who is both the researcher and the clinician.
GT: Are there other ways that informed consent differs for genomic studies?
Burke: Genomic studies — it’s a different kind of risk. Like any study, you determine the risk based on what’s going to happen. What’s different about genomics is that you have the risk of information. That’s something that we’re not used to thinking about. Once you know something, you can never unknow it.