During two proteomics sessions at last week’s PITTCON conference in Orlando, Fla., scientists addressed how multi-dimensional separations can help researchers get better resolution from the enormous amounts of proteins present in biological samples, and how they can better understand protein function by analyzing subcellular organelles and the locations of proteins within cells.
“Human cells may express up to 20,000 proteins at any time,” said Haleem Issaq, a researcher for the National Cancer Institute, during his presentation at PITTCON. “That poses analytical difficulties. How can you resolve all those proteins? The answer is, divide and conquer.”
There are a slew of ways to “divide” up proteins using a combination of centrifugation, gel electrophoresis, capillary electrophoresis, and high-performance liquid chromatography, Issaq said.
The simplest way of separating biological fluids is by gradient centrifugation, Issaq said. However, this can separate most samples into only about 10 fractions.
The principle of multi-dimensional separation is that each of those fractions can then be subjected to another form of separation, and the resulting fractions, now numbering in the hundreds, can be subjected to yet another form of separation.
One of the biggest challenges with multi-dimensional separations is the time they take, with each fraction having to be loaded and processed in a separation system, such as an HPLC.
Peter Carr, a professor in the department of chemistry at the University of Minnesota, has been using high-temperature liquid chromatography, or HTLC, to confront the problem of separation speed.
The total analysis time of a two-dimensional separation equals the number of first-dimension “slices” multiplied by the time each second-dimension separation takes, Carr pointed out. Therefore, any reduction in the time the second dimension takes translates into a tremendous gain in overall speed.
Methods for reducing the time a separation takes include using shorter columns, using monolithic columns, whose porous material allows faster separations, and using high-temperature liquid chromatography.
Toward the end of his talk last week, Carr announced that he and his research team have reduced each second-dimension cycle to 21 seconds. This allows a typical sample to be processed in less than an hour — much faster than the average 2D HPLC run, which takes about 10 hours, Carr said.
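The arithmetic behind Carr’s speed claim can be sketched in a few lines. This is an illustrative back-of-the-envelope calculation only: the 21-second cycle is the figure from the talk, but the 100-slice first dimension is an assumed count, not one Carr reported.

```python
def total_2d_time(n_slices: int, second_dim_seconds: float) -> float:
    """Total 2D analysis time = first-dimension slices x second-dimension cycle time."""
    return n_slices * second_dim_seconds

# With Carr's 21-second second-dimension cycle and a hypothetical
# 100 first-dimension slices, the whole run fits well under an hour:
seconds = total_2d_time(100, 21)
print(f"{seconds / 60:.0f} minutes")  # prints "35 minutes"
```

At a more conventional second-dimension time of several minutes per cycle, the same 100 slices would push the run toward the roughly 10-hour figure Carr cited for typical 2D HPLC.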
The most important factor in speeding up the second dimension is temperature, Carr said. However, he offered a caveat to using high-temperature liquid chromatography, which can subject samples to temperatures up to 100°C: The analyte must be thermally stable.
Having said that, Carr pointed out that in some cases thermally unstable compounds can still undergo HTLC, because the separation is so fast that they do not have time to break down. Carr said he has successfully subjected cyclic lactams, which are notoriously unstable compounds, to HTLC.
“You need to develop and optimize each dimension of separation,” said Carr.
Aside from using high temperatures, Carr has also shortened each second-dimension cycle by reducing the volume used to flush out the column after each analyte passes through.
“The old grade-school rule that you need 10 times the volume of the column to flush it out is not true,” he said. “If you do it right, you can use much less.”
By optimizing flushing, Carr’s research group has reduced the re-equilibration time for each analyte to 3.6 seconds, he said.
To demonstrate the improvement in peptide resolution gained by going to two dimensions, Carr showed how a single small peak in a first-dimension, conventional HPLC separation was resolved into nine components by a second-dimension HTLC.
“We’ve seen immense improvement in separations by going to 2D,” said Carr.
Another way to attack the problem of colossal amounts of proteins is by analyzing a small subset based on their locations within cells. This approach was addressed during a PITTCON session last week entitled “Subcellular Proteomics.”
Several speakers during the session addressed the mitochondrial proteome, including Marian Navratil, a postdoc under Edgar Arriaga at the University of Minnesota’s department of chemistry; Bradford Gibson, a researcher at the Buck Institute for Age Research; and Steven Taylor, a researcher at Amylin.
Mitochondria are especially significant in aging, said Arriaga, the session chair and organizer, who spoke with ProteoMonitor after the presentations (see p. 7).
“There are at least 300 diseases that are related to mutated mitochondrial DNA, and almost all of these mutations and diseases have similar features that one would see when people get old,” said Arriaga. “So it has been postulated that a lot of these mutations are present in tissues of aged people, or aged animals.”
Navratil explained during his talk that his research group is using proteomics techniques — in particular MALDI-TOF and iTRAQ — to analyze the differences between the proteomes of wild-type mitochondria and mitochondria carrying mutations in their mitochondrial DNA. This is clinically significant because the mutation rate of mitochondrial DNA is 10 to 100 times higher than that of nuclear DNA, Navratil said, and, as Arriaga noted, mitochondria with DNA mutations have been implicated in a number of aging-related conditions.
Gibson and Taylor also described proteomic techniques used to analyze the mitochondrial proteome.
The last speaker of the session, Robert Murphy, a professor at Carnegie Mellon University, described a different way of analyzing the subcellular proteome: using a database of fluorescent images to characterize the location of proteins within cells (see ProteoMonitor 9/3/2004).
Unlike Gene Ontology, which describes protein locations only through words, Murphy’s imaging method provides the “picture worth 1,000 words.”
“Gene Ontology describes proteins as being located within the Golgi stack, the Golgi lumen, et cetera, and the questions that that brings up are, ‘Are [those locations] the same? How similar are they?’” Murphy said.
To address these questions, Murphy has developed a system of defining protein locations by subcellular location features, or SLFs — morphological features, how close proteins are to the edge of an organelle, geometrical features, texture features, and a host of others. Murphy then builds cluster trees of proteins based on how similar their SLFs are. According to Murphy, fluorescent images, combined with SLFs, give a description of subcellular location that is more precise and discriminating than the human eye.
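The clustering idea can be illustrated with a minimal sketch: represent each protein as a numeric SLF vector and build a cluster tree from pairwise distances. The protein names and feature values below are invented for illustration — Murphy’s actual SLFs are computed from fluorescence microscopy images, and this standard hierarchical-clustering routine stands in for whatever tree-building method his group uses.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical proteins and made-up 3-element SLF vectors
# (e.g. a morphology, an edge-distance, and a texture feature).
proteins = ["giantin", "gpp130", "lamp2", "tubulin"]
slf_vectors = np.array([
    [0.82, 0.10, 0.33],
    [0.79, 0.12, 0.30],
    [0.20, 0.65, 0.71],
    [0.05, 0.90, 0.15],
])

# Pairwise Euclidean distances between SLF vectors, then an
# average-linkage cluster tree (the dendrogram structure).
tree = linkage(pdist(slf_vectors), method="average")

# Cutting the tree into two clusters groups the two similar
# (Golgi-like) vectors together, apart from the other two.
labels = fcluster(tree, t=2, criterion="maxclust")
print(dict(zip(proteins, labels)))
```

With real image-derived SLFs, the same tree answers Murphy’s question directly: proteins whose location patterns are indistinguishable land on adjacent branches, while visually similar but statistically distinct patterns separate.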
“We can use SLFs to measure similarities [in location],” said Murphy. “One significance of this is that we can now ask, ‘Does the protein change [location] pattern in response to drugs, oncogenes, etcetera?’ The data can be combined with expression proteomics.”
Murphy said he is currently working on scaling up his work of assigning proteins to high-resolution locations by analyzing scores of “blindly acquired” 3D images.