
Framingham Study Director Wants Proteomic Researchers' Help on Biomarker Work


Name: Daniel Levy

Position: Director, Framingham Heart Study, Center for Population Studies, National Heart, Lung, and Blood Institute

Background: MD, Boston University School of Medicine, 1980; medical officer, NHLBI, 1984 to present; director, cardiology laboratory, Framingham Heart Study, 1985 to 1990; director, medical clinic, Framingham Heart Study, 1986 to 1994; director, Framingham Heart Study, 1994 to present; associate clinical professor of medicine, Harvard Medical School, 1994 to present; professor of medicine, Boston University School of Medicine, 2003 to present.
 

 
Last month, the National Heart, Lung, and Blood Institute announced it was seeking partners to promote research into proteomic biomarkers linked to cardiovascular disease and related risk factors.
 
The immediate goal is to create a diagnostic test to screen people at risk for certain kinds of heart disease, with a long-term goal of finding therapeutics that can treat and prevent the disease.
 
In its announcement, the NHLBI said it would be critical to implement methods to measure large numbers of biomarkers with minimal sample volumes and cited proteomic technologies as a platform that could be valuable in achieving its goals.
 
This week, ProteoMonitor spoke with Daniel Levy, who is coordinating the NHLBI effort, about the role of proteomics in the initiative.
 
Describe what this initiative is and what it’s seeking to do.
 
First, a little background on the Framingham Heart Study: It’s one of the nation’s oldest and longest-standing cardiovascular projects, and because it’s been going on for so long and the population has aged, we’ve collected additional information far beyond the cardiovascular system into things like lung disease, osteoarthritis, osteoporosis, hearing disease, eye disease, and even cancer research. So it’s a very, very extensive database now encompassing three generations of participants.
 
How old is the Framingham study?
 
It started in 1948. It’s a very well-established study that’s been going on for a long time with rather extensive characterization of three generations within families. The first generation started in 1948, the second generation in 1971, and the third generation in 2002.
 
This fall, we initiated a genome project for Framingham that will genotype over a half-million individual SNPs in 10,000 study participants, about 5.5 billion genotypes in total, making it one of the largest genetic projects in a general population. And our aim is to make that information a general resource for the entire scientific community by sharing it promptly and by working with the National Library of Medicine and the National Center for Biotechnology Information to make this resource available to as many people as possible, because it will be a huge amount of information.
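The scale quoted above follows from simple multiplication; a quick sanity check (the SNP count of 550,000 is an assumption, chosen only to be consistent with "over a half-million"; the exact panel size is not stated in the interview):

```python
# Sanity check of the genotyping scale described above.
# 550,000 SNPs per participant is an assumed figure consistent
# with "over a half-million"; the exact panel size is not given.
snps_per_participant = 550_000
participants = 10_000

total_genotypes = snps_per_participant * participants
print(f"{total_genotypes:,} genotypes")  # prints "5,500,000,000 genotypes"
```

This is simply the per-participant SNP count multiplied across the cohort, matching the "about 5.5 billion" figure quoted in the interview.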
 
[This project] is designed to add another element of information and opportunity to the Framingham data. And that is measuring biomarkers in about 7,000 study participants in the second and third generations, and in doing so, learning about the biochemical signatures of important diseases and being able to relate biomarkers to genetic variation to disease, having all aspects of that.
 
We already will have the genetic biomarkers from the genome project I just described, and we are now hoping to get circulating biomarkers. I’m including protein biomarkers in that category, though while some of them will be protein biomarkers, some of them may be more metabolomic or lipidomic markers. The other opportunity here, if someone has expertise, is informatics.
 
Are cardiovascular disease and heart disease the first focus of this project?
 
The areas that we are starting with include atherosclerosis and individual measures of atherosclerosis in one grouping of traits. And the other is what we call metabolic syndrome and its components in the second group. So relating biomarkers to obesity, diabetes and insulin resistance, high blood pressure, [and] dyslipidemia would be an important category of traits.
 
What role does proteomics have in this project?
 
Proteomics provides the opportunity to measure very large numbers of circulating biomarkers, and improvements in technology may allow for quantitative, or reasonably quantitative, assessment of the plasma proteome…allowing in minute amounts of specimen [the detection of] hundreds of potentially interesting biomarkers.
 
Conventional proteins, right now, are typically measured through immunoassays, things like ELISA, and there’s a limit to how many markers you can measure by ELISA. [The first problem is that] right now, there’s a limited number of existing monoclonal antibodies for ELISA-type reactions. And the second [problem] is there’s a limit to the ability to multiplex and miniaturize that process, so there’s a limit to how many biomarkers you could measure from, let’s say, a one-milliliter specimen.
 
Proteomics allows you to measure many more biomarkers, and you don’t have to have antibodies to do so, so it skips one step in that process. Historically, the problem with proteomics was difficulty being quantitative.
 
 
That leads to my next question, which is what are the challenges of finding biomarkers specifically for heart disease?
 
There are two issues. The first is [that] measuring large numbers of biomarkers can include biomarkers that we already suspect to be involved in the disease because of the pathways they reside in. Another opportunity, though, with such a project is to identify novel relationships, and those novel relationships, in turn, may point to new pathways, and new pathways, in turn, may represent new therapeutic targets.
 
One thing we expect to come out of this project in the short term is a diagnostic test that could be used to identify high-risk people for these diseases. In the longer term, we would hope that new therapeutics might come out of this.
 
Do you have any timetable for a diagnostic?
 
It’s hard to know. Obviously, it will depend on the laboratories doing this, the throughput, the expertise of an analytical core, a funded center for doing the statistical analysis, and also the availability of bioinformatics to make all of this information accessible in a database.
 
So these are all things that are necessary and have to come together in order for this to be effective. Working with groups that have expertise in all of these areas will be important elements in putting together a team that can be effective.
 
Is there any way to characterize where we are in terms of proteomic research into heart disease?
 
There’s a great opportunity here of developing new diagnostic tests and also identifying therapeutic targets that could lead to treatments in the future. That’s my hope.
