Proteomic, Genomic Interpretations ‘Simplistic’ and ‘Misleading,’ Study Finds
 
Data generated by genomic and proteomic technologies are overly complex, causing some cancer researchers to arrive at “simplistic” or “misleading” conclusions, according to a review article in the January issue of Nature Reviews Cancer.
 
The study, led by researchers at Georgetown University Medical Center, concluded that scientists “don’t appreciate how complex the data is that is being generated,” ProteoMonitor sister publication GenomeWeb Daily News reported last week.
 
High-throughput genomic and proteomic tools “have allowed us to see that nature is more complex than we thought, and while we don’t yet know what the overarching biological rules are — such as the interrelationship between multiple signaling pathways that can lead to cancer development — we are trying to play the game like we do,” lead author Robert Clarke, professor of oncology and physiology and biophysics at GUMC’s Lombardi Comprehensive Cancer Center, said in a statement released last week.
 
“The answers to our questions are probably there in the data, but the issue is whether we can get them using these complex tools and, also, how we will know they are right when we see them,” he added.

Clarke, who is also interim director of GUMC’s Biomedical Graduate Research Organization and co-director of the school’s Breast Cancer Program, led the analysis with six other scientists from Georgetown and from Virginia Polytechnic Institute.

 
In the statement, GUMC said researchers like Clarke are currently studying ways to “understand the theory and properties of the data” generated by genomic and proteomic tools and “how they may affect data analysis and interpretation.”
 
At the core of the challenge is the fact that, in the clinical evaluation of cancer, the “thousands of active molecules” in a single excised tumor sample produce “very high-dimensional data spaces.” As a result, researchers face “10,000 or so dimensions, if you consider a molecule working along a pathway as a dimension.”
 
Clarke uses the analogy of a box, which has a height, a width, and a length. But if you add color and fiber, you add two dimensions, he said. “There are countless things going on in a cell that could describe it; this is the essence of multi-dimensionality and these tools tell you all of that.”

Not all of these data will be relevant to the research that yielded them. “Some cells in a tumor are dying, some are not. Some are growing, others are not. Some are trying to spread and the rest aren’t,” Clarke said. “Everything is going on in a tumor at once, and all of these activities require coordination of different genes. So it may not be accurate to analyze these molecules as if they are all focused on performing a single function.

“We need to discover what specific genes perform which function,” he said. “If we knew the rules” — which genes participate in which process, for instance — “we should be able to understand some of the questions we have, but we are not there yet.”

 

 
Proteome Sciences Drawing Licensing Proposals for Technology
 
Proteome Sciences said last week that it has received draft license proposals for its isobaric tandem mass tags technology from a number of “major” companies, and that a proposal to “conclude an exclusive license agreement” for the technology is being finalized.
 
As a result, license negotiations may not be finished until early this year, rather than the end of 2007 as the company had previously anticipated.
 
The company received the US patent for the technology in November, providing it with a potentially significant source of revenue. Proteome Sciences had already been granted patent rights in parts of Europe, Australia, New Zealand, and Canada.
 

 
ISB, NYU Researchers Develop Genetic, Proteomic Model for Cell Response
 
Scientists at the Institute for Systems Biology and New York University have developed a model that can characterize and predict how a free-living cell responds on the molecular level to genetic and environmental changes.
 
The model, called EGRIN, for Environmental and Gene Regulatory Influence Network, used data from genome-wide binding-location analyses for eight transcription factors; mass spectrometry-based proteomic analysis; protein-structure predictions; computational analysis of genome structure; and protein evolution.
 
The results of their study may enable researchers to perform “more complex” genetic engineering with “fewer unintended consequences,” the scientists said in a statement.
 
Writing in the online edition of the current issue of the journal Cell, the researchers showed that EGRIN was able to link biological processes with previously unknown molecular relationships. They also showed that it could predict new regulatory powers of known biological processes as well as how more than 1,900 genes respond to “novel” genetic and environmental experiments.
