UPitt’s Andreas Vogt on High-Content Analysis in an Academic Setting

At A Glance

Name: Andreas Vogt

Position: Research assistant professor, Department of Pharmacology; Associate director, Fiske Drug Discovery Laboratory, University of Pittsburgh

SAN FRANCISCO — As high-content assay technology continues to mature, its uptake in academic screening laboratories has grown. Once thought of as strictly an industrial-scale drug-discovery tool, high-content cell-based assays are now being used by research labs to probe the action of small-molecule compounds in drug discovery and functional genomics. One such prominent lab is the University of Pittsburgh’s Fiske Drug Discovery Laboratory, headed by Andreas Vogt. Vogt, who gave a presentation here last week on his lab’s use of high-content methods, sat down for a few moments with Inside Bioassays to provide a bit more color.

How did the Fiske Drug Discovery Laboratory evolve, and how did you become involved with it?

The Fiske Drug Discovery Laboratory is a satellite lab of John Lazo, who used to be the chairman of pharmacology at the University of Pittsburgh. A private party donated that laboratory — a small 400-square-foot facility in the same building in which Pitt’s department of pharmacology is located. It became pretty clear that the laboratory was going to be dedicated to high-throughput types of applications, and then it became clear that it would become high-content analysis. The reason for that is that John Lazo was serving as a board member for Cellomics at the time — he doesn’t do that anymore. That was in the late 1990s, and that’s when I joined Lazo’s group.

You were already at the University of Pittsburgh?

I was already there, doing my postdoctoral work with another researcher. But that researcher left, and John Lazo recruited me — in part to head up that drug-discovery lab and to build it into something where we could employ technologies that we otherwise hadn’t used before.

You said that it became clear this would be dedicated to high-throughput applications — that’s not something a lot of academic centers do. What do you think was the driving force behind this?

My interest in drug discovery goes back 15 years or so, and Lazo’s interest goes back pretty far, too. And at that time, there were already projects going on at the university related to drug discovery. But high-throughput screening at that time — maybe 1996 or 1997 — wasn’t really all that accepted, and in academia it wasn’t accepted at all. It wasn’t until the last year or two that it has been recognized that screening may have a place in academia, and the negative connotation has begun to be removed. So when we started this, nobody was screening. We were one of the first [academic] groups to do target-based drug discovery. Now I have to go back 10 years further, because academics had always been working on interesting targets and proteins in the cell, from a biological standpoint, to understand what they were doing. But in the early 1990s, more people began to want to exploit that for therapeutic intervention.

Why was high-throughput screening not always accepted in academia? What are the challenges associated with doing it in an academic environment?

There was clearly an anti-screening sentiment, because screening was viewed as anti-intellectual. A lot of academic research is supposed to be hypothesis-driven, and screening, if you look at some papers, will sometimes be called hypothesis-free or unbiased — in the early days that would have been called fishing, or using unguided approaches. Those were actually some of the statements in grant reviews that we would get. But it’s not so. Our challenge as academics is to not just play the numbers game, but to develop new targets, new approaches, or utilize this in a smarter way — developing reagents and developing methods so that we can answer questions that we’re interested in. I think that pushed the whole thing over a little bit, even the perception, when people realized that we could phrase it that way. That made a big difference.

The hypothesis now is that you can actually use small molecules to interrogate biological systems. That’s not fully proven yet, because the work is still ongoing, and in many settings all of this high-content and high-throughput screening is just viewed as a means to test that type of hypothesis. The difference is that small molecules may hold great promise as biological tools. We always seem to think about therapeutics and new cancer medicines, but there is also tremendous value here as a tool to interrogate biology. It’s different from genetic approaches, because it’s immediate, it’s reversible, and you don’t get adaptive responses of cells, like you do even with siRNA — who knows whether an siRNA effect isn’t due to some secondary effect? So small molecules, presumably if they were specific enough, would allow you to very precisely probe the function of one protein.

A good example of that is the MEK inhibitor PD98059. You hear about that all the time. It’s a kinase inhibitor that disrupts a specific pathway. It was developed in 1995, and since then, 6,000 publications have used it as a reagent. And the conclusion of all those papers is the same — that particular pathway and target is necessary for whatever function they’re investigating.

Your lab was one of the first to adopt high-content screening technology, with a Cellomics instrument. What value did you see in high-content screening then, and has that changed at all?

That’s an excellent question. The ArrayScan was brought into the lab via a joint grant we had with Cellomics at that point. It was entitled “Smart Assays and Libraries Development,” or something similar. Back then, I didn’t fully realize what it was going to do for us. I thought it had potential, but I couldn’t fully explain what that potential was going to be. It turned out to be … not quite unproven, but the statements have changed a little bit over the years, and the utility of HCS — we learned a lot as we went along as early adopters. I’ve always viewed this as a tool to develop new assays or new technologies so I could tackle some problems that I otherwise would not have.

How is your lab using high-content analysis in your research now?

I’ll tell you about some of the stuff that is going on right now, which is what my presentation here is on. The lab that I’m working with has an interest in a sub-group of phosphatases in the cell — phosphatases are the enzymes that take phosphates off of proteins and counteract kinases. And there is a specific sub-group called dual-specificity phosphatases that take phosphates off tyrosines and threonines on the same protein substrate. There are two prominent examples: the Cdc25 cell cycle phosphatases and the MAP kinase phosphatases. Both of these are involved in cell cycle regulation, proliferation, survival, or apoptosis — stress responses. Phosphatases are not very prominent targets, so the kinase field is probably about five to 10 years ahead of the phosphatase field. One of the reasons that I have found, and it’s just a personal opinion, is that the phosphatases are a lot harder to tackle.

I picked the MAP kinase phosphatases because there were technical and intellectual challenges with that protein. MKP-1, in particular, was discovered 10 years ago, but because it dephosphorylates a protein thought to be involved in survival, everybody thought it might be a tumor suppressor. So that’s how people looked at it. But that was never formally proven, and I think it was possibly overlooked as a target. In addition, it was really hard to make, so it couldn’t readily be employed in in vitro screens, because you can’t make a lot of it. And there were no good assays for it in the cell or in the cellular context. It is such a complicated biological system that if you use the phenotypic readout of protein phosphorylation for its substrate, Erk, that could be anything — that could be so many things.

So what I used HCA for is to develop a definitive assay that measures target readout in two sub-populations: the ones that have the target over-expressed, and the ones that don’t. We do that by transient transfection in the same well, and then we add chemicals and try to evaluate whether they override the effect of the target protein. The magnitude of that differential response will tell you whether that compound is active or not. That is novel, and that’s sort of the niche that I’ve been going into — trying to get away from the phenotypic readouts, the simple phenotypic screens, and trying to harness the power of single-cell analysis for definitive assays for targets that are hard to tackle.
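As a concrete illustration of the scoring logic Vogt describes, below is a minimal sketch in Python of how a two-subpopulation differential response might be computed from per-cell image-analysis data. It is a sketch under stated assumptions only: the record fields, the transfection-marker threshold, and the activity cutoff are hypothetical, not details of the Fiske lab’s actual pipeline.

```python
# Hedged sketch of a two-subpopulation differential-response score.
# "marker" (transfection-marker intensity) and "readout" (e.g., substrate
# phosphorylation signal) are illustrative per-cell fields, not the actual
# output of any particular image-analysis package.
from statistics import mean

def differential_response(cells, marker_threshold=500.0):
    """For one well, split cells into target-over-expressing and
    untransfected subpopulations and return the difference in mean
    readout between the two."""
    overexpressing = [c["readout"] for c in cells if c["marker"] >= marker_threshold]
    untransfected = [c["readout"] for c in cells if c["marker"] < marker_threshold]
    if not overexpressing or not untransfected:
        raise ValueError("well lacks one of the two subpopulations")
    return mean(overexpressing) - mean(untransfected)

def is_active(treated_wells, control_wells, cutoff=0.5):
    """Call a compound active if it collapses the differential response
    seen in untreated control wells; the 0.5 cutoff is arbitrary."""
    control = mean(differential_response(w) for w in control_wells)
    treated = mean(differential_response(w) for w in treated_wells)
    return abs(treated) <= cutoff * abs(control)
```

The appeal of the design, as described above, is that the untransfected cells in each well act as an internal control for the over-expressing ones, so a compound’s activity is read from how much it shrinks that within-well differential.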

Have you used or evaluated any other technologies for high-content analysis besides the Cellomics platform?

No, Cellomics has been it. That has a lot to do with the financial situation at universities. We’ve had this long-standing relationship with Cellomics. We helped them out in the very beginning, when nobody knew what this high-content screening was going to be useful for — including the people at Cellomics. They were learning as they went along, making numerous upgrades to the technology, and they needed to get some credibility, and we provided that through publications with them. And of course, they are in Pittsburgh, so they were right across the street. And as I said, we had a formal relationship — which we don’t have anymore. But that’s how we learned how to use high-content screening — with the ArrayScan — and there was really not any need at that time to investigate an alternative platform that does the same thing.

Now, that doesn’t mean that there aren’t technologies out there that would have different capabilities and fill different needs, but they are not quite as advanced yet. I can give you some examples of ways HCS can be different from the ArrayScan or similar instruments. One development of note is Vitra’s CellCard, which increases throughput a lot through its multiplexing capabilities. Another, which is not yet available, is called LEAP, from Cyntellect. I would have loved to help out — I think we put a proposal in, but it didn’t get funded. That’s one that would manipulate individual cells, and that is something one might consider in the future. These are the two things that really come to mind. All of the other platforms, as I see it, are competing for the same pool of customers with a little bit of a twist here or there.

The other area where I think improvements need to come is data analysis. While that may not be the domain of the high-content providers, it may be that of other people who will pick up the ball and run with it. Companies have very different philosophies on how they go about the analysis of their data sets; some people say that we don’t really need that much. Of course, the big companies will always develop the tools that customers want, and it’s a very interactive, iterative process between the high-content screening vendor and the user.

What’s next for the Fiske Drug Discovery Lab?

I’ve said that we’re trying to carve niches, and be smart, and do hypothesis-driven research, but the plans are that we will expand. We will go into larger numbers of compounds, and one of the reasons is that we have this great collaboration with the chemistry people at Pitt. They make libraries — sub-libraries, focused libraries, et cetera. Because there is so much interest from the chemists, we already have 100,000 compounds or so, and Lazo is getting a whole institute, as a matter of fact, for drug discovery. So we will have automated sample storage, maybe 200,000 compounds, and much stronger capabilities to get better lead structures that we can then pass on to the chemists. That new building is going to be finished, they are now projecting, in the fourth quarter of 2005.

 
