FDA's Petricoin Sees Protein Arrays as New Element of Clinical Trials

AT A GLANCE

NAME: Emanuel (Chip) Petricoin

AGE: 37

POSITION: Co-Director, FDA-NCI Clinical Proteomics Program; Senior Investigator, Center for Biologics Evaluation and Research, FDA Division of Therapeutic Products

PRIOR EXPERIENCE: Developed proteomics for cytokine signaling analysis at FDA

How did you start working with SELDI (Surface Enhanced Laser Desorption Ionization)?

Initially we decided to focus on some traditional proteomic platforms like 2D gel electrophoresis, and [asked the question], could we take laser capture microdissected (LCM) material and actually profile that material on a 2D gel? [It was] pretty non-sexy, low-throughput stuff, but we felt we could develop methodology to go from tissue fixation and preservation through laser capture on to a 2D gel. A year was spent basically just developing methods by which one could take human tissue specimens, perform LCM, and even do a Western blot effectively. We thought, let’s look at some good patient-matched samples, look at 2D gels and actually see if we could a) do it in the first place, b) [answer] the questions of how much, how long, how many shots are needed, and c) see if we could publish a paper showing that we could actually do it. That would be a good reward for all that work.

That’s what we first started doing, [but] because of the limitations of LCM — the amount of time, the scarcity of clinical material — we [started] looking at interesting technologies such as Ciphergen’s SELDI tool. [It] started cropping up in the late ’90s as a potentially new way of looking at cancer and other diseases, with more of a non-traditional, high-throughput approach that we thought might require a lot less material than doing 2D gels. We started exploring the use of the Ciphergen system really early on, using it in a completely different way than the way we’re using it now. We actually were thinking about using the Ciphergen technology almost like an antibody-capture array, so that we could look at specific proteins in microdissected cells, or develop a molecular portrait of cancer in much the same way people [use] gene expression arrays. We wanted to see if we could generate a fingerprint of proteins on SELDI that could distinguish normal prostate epithelium from prostate cancer epithelium, and then distinguish prostate cancer epithelium that was very aggressive versus clinically indolent. We had the same type of methodological issues, and what we found was, yes, we could do that, and we published the first application of LCM and SELDI analysis for studying cancer progression and metastasis.

But, for the most part, it wasn’t really telling you anything more than what the pathologist could tell you already by looking at the cell directly. It was just another way of giving you pathologic lineage analysis. It certainly has utility in discriminating different types of tumors from each other, and maybe tumors of unknown origin, and maybe distinguishing bad actors [from] ones that are more indolent [among] the same tumor type. But it really didn’t pan out, because you couldn’t actually identify what the protein peaks were on the SELDI chip. It wasn’t as much of a discovery tool as everyone thought. I think even Ciphergen would admit that right now most people are more interested in SELDI as a profiling tool, rather than as a discovery-based tool, where someone sees a peak and then identifies what that peak is. We took to heart the adage “dig where the gold is,” [and] with SELDI the gold is being able to perform high-throughput proteomic fingerprinting very rapidly with limited amounts of material.

 

When did you start testing out the Ciphergen SELDI system?

In 1998, we started looking at it with LCM, basically looking at it as a proteomic profiling tool, [but] the problem was when we started looking at it that way, it really required a higher order analytical tool to mine that data. That wasn’t really available. It was a head-scratching quandary. It was like: “Wow, here’s something that gives us a proteomic fingerprint back in a few seconds. How do we mine this data? There’s a lot to look at here.” In a sense we were able to get around that issue when we linked up with Correlogic.

 

How did you hook up with Correlogic Systems?

It was actually complete serendipity, which in science is often what translates into actually doing something useful. I met the president of Correlogic, Peter Levine, through my wife. My wife and his wife are friends, and we were at a dinner party and it was kind of like, “So, what’s going on in your life?” He started talking about the company, which was involved in pattern recognition for Internet fraud, looking at security systems, and looking at patterns of word usage by terrorists and things like that. It was very outside the scope of medical science. But as he was describing to me the way that their methodology parsed and looked at data, we started talking about biological applications. [I asked], “Could you do this, could you do that?” and he said, “Yeah, yeah.” So I said that we should really start looking at this SELDI data together to see whether we could find anything there. That led from one thing to the next. Lance [Liotta] and I had looked at some pattern recognition principles, [but] the problem was that there are so many different ways to skin the cat. There are tons of supervised learning algorithms, and tons of unsupervised learning algorithms, and each has its limitations. The interesting thing that struck a chord with me about Correlogic was that they actually had a system in place that combined many different bioinformatics tools.
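Correlogic's actual system is proprietary, but the general shape of the problem — search for a small subset of m/z features whose pattern separates disease from normal spectra — can be sketched in a few lines. The following is a hypothetical illustration only: it uses synthetic "spectra," a random search as a bare-bones stand-in for genetic-algorithm-style feature selection, and a simple nearest-centroid classifier; none of the names or numbers come from the article.

```python
# Illustrative sketch (not Correlogic's algorithm): search for a small
# m/z feature subset that lets a nearest-centroid classifier separate
# two classes of synthetic spectra.
import numpy as np

rng = np.random.default_rng(0)

def make_spectra(n, n_features=50, informative=(3, 7, 11)):
    """Synthetic spectra: two classes differ only at a few m/z positions."""
    X = rng.normal(0.0, 1.0, size=(n, n_features))
    y = rng.integers(0, 2, size=n)
    for f in informative:
        X[y == 1, f] += 2.0  # class-1 spectra shifted at these "peaks"
    return X, y

def centroid_accuracy(X, y, features):
    """Score a feature subset: classify each spectrum by nearest class centroid."""
    Xf = X[:, features]
    c0, c1 = Xf[y == 0].mean(axis=0), Xf[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xf - c1, axis=1) <
            np.linalg.norm(Xf - c0, axis=1)).astype(int)
    return (pred == y).mean()

X, y = make_spectra(200)
best_feats, best_acc = None, 0.0
for _ in range(500):  # random search over 3-feature subsets
    feats = rng.choice(X.shape[1], size=3, replace=False)
    acc = centroid_accuracy(X, y, feats)
    if acc > best_acc:
        best_feats, best_acc = sorted(feats), acc
```

The point of the sketch is the combinatorial one Petricoin raises: with tens of thousands of candidate feature subsets, a higher-order search strategy is needed to mine the fingerprint, which is where combining search heuristics with a downstream classifier comes in.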

 

You were initially using commercially available protein arrays. Why create your own?

In the past, a lot of the array-based approaches have immobilized antibodies or aptamers onto a planar array, let that incubate with a lysate from a diseased or normal [sample] and then looked at the difference [between the two]. That’s really difficult to do with clinical material, because the problem with those technologies — and really the problem with most proteomic technologies — is that they require a fair amount of cellular input to get at the low-abundance proteins. These technologies that everyone is using — ICAT, multidimensional liquid chromatography going into mass spec technologies — we would utilize them if we could. The problem is that we can’t utilize those types of technologies when you only have a couple hundred cells from a biopsy specimen. There’s too little protein to do anything with. You can throw out all of those standard proteomic technologies, or even the ones that are on the leading edge of proteomic science.

What we did was, we [made] arrays that are comprised of the microdissected cells themselves. We call these reverse-phase protein arrays. A lot of companies are really interested in licensing this technology from the government now. Only a few cells need to be microdissected, and that lysate can be applied to dozens and dozens of arrays, and then you can look at many different outcomes from a limited amount of biopsy sample. That’s why for the first time we can look at patients before, during, and after therapy, take a small piece of biopsy and look at hundreds of signaling events simultaneously from that small piece of material, and we only need to LCM a few cells to do that. That’s really had a huge enabling impact on our ability to look at things. We actually array all of our lysates in miniature dilution curves, which means that whatever analyte you’re looking at, you’re always in the linear dynamic range, which is very important. A lot of people are interested in potentially licensing that concept itself. We can specifically look at lots and lots of signaling pathways simultaneously.

We have the patient before treatment, after drug treatment, and then we’ll have the responders and non-responders, and then we can actually ask how these pathways are changing in response to the drug, and how that correlates with response and non-response. That’s our first phase, information gathering. Our next phase will be to actually start choosing patient therapy based on the up-front diagnostic proteomic profile, and then monitoring in real time during the clinical trial the response that patient is having. The third phase, hopefully, if all that shows promise, is to then potentially even change the therapy depending on whether or not the patient’s response rates by proteomic endpoints are changing.
That’s how [NCI Director] Carl Barrett would like to see all the clinical trials at the NCI done, especially with molecular-targeted therapeutics as we get more and more of them.
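The dilution-curve idea can be illustrated numerically. In the sketch below — invented numbers and a generic saturating-binding model, not any real assay's response — a concentrated spot reads out near saturation, so its signal no longer tracks input; spotting a dilution series lets you pick the spot where the response is still proportional to concentration.

```python
# Illustrative sketch (hypothetical numbers): why a dilution series keeps
# an array readout in the linear dynamic range.
import numpy as np

def signal(conc, s_max=100.0, k=5.0):
    """Saturating response: proportional at low conc, flat near s_max."""
    return s_max * conc / (k + conc)

# Four-fold dilution series, most to least concentrated (arbitrary units).
dilutions = 100.0 / 4.0 ** np.arange(6)
readout = signal(dilutions)

# Local log-log slope between adjacent spots:
# ~1.0 means linear response, ~0.0 means saturated.
slopes = np.diff(np.log(readout)) / np.diff(np.log(dilutions))

# Most concentrated spot reached by a near-linear step (slope > 0.8).
linear = [i + 1 for i, s in enumerate(slopes) if s > 0.8]
best = dilutions[min(linear)] if linear else dilutions[-1]
```

With these made-up parameters the two most concentrated spots are nearly saturated (slope close to 0), and only the most dilute spots respond linearly — which is why quantifying any analyte from a single, fixed spotting concentration would be unreliable.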
