
Tomas Lundqvist of AstraZeneca on Solving Structures More Efficiently


At A Glance

Name: Tomas Lundqvist

Position: Associate director, Structural Chemistry Laboratory, AstraZeneca R&D, since 1996.

Associate professor of molecular biology, Swedish University of Agricultural Sciences, Uppsala, since 1996.

Background: Senior scientist, project manager, and group leader, Pharmacia and Upjohn, Stockholm, 1992-96.

Post-doc in structural studies, Medical Research Council, Cambridge, UK, 1991-92.

Post-doc in R&D, E.I. Dupont de Nemours, Wilmington, Del., 1990-91.

PhD in molecular biology/protein crystallography, Swedish University of Agricultural Sciences, 1990.

MS in agriculture and biotechnology, Swedish University of Agricultural Sciences, 1986.

 

Tell me about your work at AstraZeneca in high-throughput structure determination.

We’re not normally thought of as being high-throughput when it comes to genomics programs. [High-throughput is normally where you’re] looking at a very large number of genes [and] you try to solve as many structures as you can, allowing for quite a high failure rate and going with what works. And that’s a pattern that favors robotics and all these technologies that have been developed. Whereas in industry you have your targets that have been selected and validated and you have to go with those, preferentially from the human species, which is what is of interest. And you cannot be veered from that. So what we have to do is just become better at handling proteins the way they are. But in doing so and doing it well, I think we have success rates of about 80 percent in what we do. And if you compare that to the structural genomics efforts, which tend to land at around 5 percent or worse, that’s quite a difference. So we mean different things when we say high-throughput. We’re happy if we can deliver structures in a timely manner in[to] the projects before their chemistry plans get too involved. And the impact of structural information is very much correlated with when you deliver it in the project.

So the high failure rate with structural genomics efforts has to do with the way in which people are trying to determine structures so quickly and at such high throughput?

By limiting yourself to what robotics can do and what can be streamlined, you have to [limit the] type of experiments you can do with each protein. So a huge chunk of work for us is physical characterization of the protein that’s been produced, that is, finding the conditions that are optimal for handling it in terms of formulations and so on, and that’s very tough to do in a high-throughput manner. Particularly since you might have to do the optimization already at the expression stage, so that the protein folds happily during expression and extraction from the cell and throughout the whole handling process. Some of them might need ligands to stay happy through that process. And [in that case you’re] at a level where you can’t do it in high-throughput at all: you can’t identify ligands and conditions for all of the proteins you’re dealing with. But having said that, we’re trying to become more high-throughput in that stage as well and do it in a more systematic way.

How are you trying to become more high-throughput?

By [using] robotics, but the right type of robotics, I guess. One focus is to do the biophysical characterization of the proteins in a sort of high-throughput manner, to validate that they are folded, nonaggregating, and in optimal conditions before entering crystallization. We do have robotics for doing high-throughput crystallization, but we try not to get too carried away; rather, we make sure that when the proteins enter the crystallization [step] they are in optimal shape. I think too many people concentrate solely on the crystallization step. The sad truth is that if you don’t find crystals under the first 500 conditions, you’re not likely to find any under the next 1,000 or 10,000 conditions. Then you’re far better off going back to reformulation of your protein: exposing it to different ligands, buffers, detergents, and various salts, measuring activity and stability, and then taking it through another set of unbiased screening.
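
To make that decision logic concrete, here is a minimal Python sketch of the workflow he describes: run a limited screen, and on failure go back to reformulation rather than expanding the screen. Every name, number, and probability is invented for illustration, and the lab steps are stubbed out with random placeholders; this is not an actual AstraZeneca pipeline.

```python
# A minimal sketch of the decision logic described above: run a limited
# crystallization screen, and if it fails, reformulate the protein instead
# of screening ever more conditions. All values are invented; the lab
# steps are stand-ins.

import random

BUFFERS = ["acetate", "HEPES", "Tris"]
SALTS = ["NaCl", "KCl", "(NH4)2SO4"]
ADDITIVES = [None, "glycerol", "detergent", "ligand"]

def run_screen(stability, n_conditions=500):
    """Stand-in for a ~500-condition screen: hit odds rise with stability."""
    return sum(random.random() < 0.0005 * stability for _ in range(n_conditions))

def measure_stability(buffer, salt, additive):
    """Stand-in for a biophysical assay (e.g., a thermal-shift readout)."""
    return random.uniform(0.0, 10.0)

def optimize_then_screen(max_rounds=3):
    stability = 1.0
    for round_num in range(1, max_rounds + 1):
        hits = run_screen(stability)
        if hits:
            return f"{hits} hit(s) in round {round_num}"
        # No hits in ~500 conditions: reformulate rather than screen more.
        stability = max(
            measure_stability(b, s, a)
            for b in BUFFERS for s in SALTS for a in ADDITIVES
        )
    return "no hits after reformulation; revisit the construct"

print(optimize_then_screen())
```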

So you think it’s a matter of putting in the time at the front end so you don’t go through the whole process without having the optimal conditions?

Yeah. We don’t try to solve problems in protein formulation by crystallization; we try to solve them by characterization of the protein prior to crystallization, in order to have a much higher success rate in the crystallization itself. Sometimes, by being lucky, you can compensate for the [problems] in your formulation with your crystallization conditions. That’s what people have done in the past: if you know through the crystallization screening that your protein likes acetate, and you then reformulate your protein in acetate and run it through the screen again, all of a sudden you have hits all over the place. So we try to be more upfront in how we do things and get more out of [each of] our constructs.
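
Read as a procedure, the acetate example is a small feedback loop: find the component that sparse hits share, fold it into the formulation, and rescreen. A toy Python illustration, with invented hit data (the ingredient lists are hypothetical):

```python
# Toy feedback loop: identify the ingredient shared by the most screen
# hits, then suggest reformulating in it before rescreening. Hit data
# below is invented for illustration.

from collections import Counter

def dominant_component(hit_conditions):
    """Return the (ingredient, count) shared by the most screen hits."""
    counts = Counter(ing for cond in hit_conditions for ing in cond)
    return counts.most_common(1)[0]

# Hypothetical hits from a first-pass screen, each as its ingredient set.
hits = [
    {"acetate", "PEG 4000"},
    {"acetate", "(NH4)2SO4"},
    {"acetate", "MPD"},
]

ingredient, n = dominant_component(hits)
print(f"Reformulate in {ingredient} ({n}/{len(hits)} hits) and rescreen.")
```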

What role does in silico modeling play in the whole process?

I think you can improve your chances of obtaining crystals by playing with quite conservative mutations, for example by replacing certain amino acids. Lysine turns out not to be very good for crystal packing; you’re much better off using arginine. The distribution of those amino acids varies between species. So, for example, it’s always been very difficult to crystallize protein from Helicobacter pylori because it has a high frequency of lysines, compared to, for example, E. coli, which tends to favor arginines in the same positions. Correcting for that can of course be done more easily if you have access to a homology model: then you can predict which residues are going to be on the surface, and check with bioinformatics tools whether they are conserved or not. Non-conserved residues are of course more attractive to mutate. Some companies do this more systematically and have had great success in getting more and better crystals.

This is important since for most experiments you are in much better shape if you have access to several crystal forms from which to choose. Some of them will be more attractive for carrying out your experiments than others. It might have to do with the crystallization conditions, the salts and pH, but access to active sites can also vary in different crystal forms, which matters when you are trying to soak in your ligand. So by going that extra mile to obtain more than one crystal form, and by trying to stick with conditions that are more compatible with ligand studies, you’ll dramatically increase your chances of being able to follow up on weakly binding compounds, not least those you obtain, for example, from virtual screening or fragment-based approaches. That’s definitely one of our biggest upcoming challenges: to follow up compounds that are smaller and have much weaker binding than we’ve been used to in the past.
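
As a sketch of that selection logic, here is a minimal Python example, with invented inputs, that flags non-conserved surface lysines as Lys-to-Arg candidates; in a real setting the surface and conservation calls would come from a homology model and a sequence alignment.

```python
# Minimal sketch of the mutation-selection idea above: flag surface
# lysines that are not conserved as Lys->Arg candidates for better
# crystal packing. All inputs below are invented toy data.

def propose_lys_to_arg(residues, surface_positions, conservation):
    """residues: {position: one-letter code}; conservation: {position: 0..1}.
    Returns positions worth mutating."""
    candidates = []
    for pos, aa in residues.items():
        if aa == "K" and pos in surface_positions and conservation[pos] < 0.5:
            candidates.append(pos)  # non-conserved surface lysine
    return candidates

# Toy example: three lysines, two on the surface, one of those conserved.
residues = {10: "K", 42: "K", 77: "K", 90: "R"}
surface = {10, 42}
conservation = {10: 0.2, 42: 0.9, 77: 0.3, 90: 0.5}

for pos in propose_lys_to_arg(residues, surface, conservation):
    print(f"K{pos}R: non-conserved surface lysine, packing candidate")
```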

Are there any cases where it would be acceptable to completely replace the crystal-based structure determination with virtual modeling?

It depends on what questions you’re going to ask. If you’re going to predict binding from the model, I would say it’s pretty far off still. If it is for assisting in library design, just getting the general properties of active sites, for example, then it might be very useful. So it all depends on the level of detail you need for your prediction. Another area that is tough to handle by modeling is protein flexibility, predicting which conformations are possible and so forth. That’s another case where I’d argue it’s good to get a diverse set of experimental structures to support the modeling, giving more insight into the flexibility of the protein by providing a large number of starting models showing different conformations of the active site.

How did you get into drug target and structure determination work?

It started all the way back in my PhD. I did a post-doc at Dupont in the US, got involved in a drug design project, and got sort of hooked. So even though I did other academic post-docs in between, when I got an opportunity to get back into industry I took it. I liked the environment around drug discovery in industry: you have such broad projects where you meet all kinds of people, and you find your structure being much more looked at and much more used than you would in a normal academic setting, where you and a couple of other people tend to be the only ones interested in your structure.

I started off as a molecular biologist [looking at mutations], but I found that the mutations were very hard to interpret in the absence of structural information. So then I decided to combine the two and go into the world of structures as well. And that background has been very useful because now the two are so integrated, and the quick way of getting into protein structures is to do a lot of supplemental biology.

Anything else to add about high-throughput structure determination?

I think the term high-throughput should be used with care, because it means different things to different people. What can be seen as high-throughput in one area can be very low-throughput in another. So we try not to use the term high-throughput. We’ve always tried to focus on high output instead, and now we’re trying to gradually change that into high impact. Because it’s not really about how many structures you solve, but whether you solve the important ones, the ones that have high information content. Needless to say, solving 50 structures of the same target using similar ligands is less valuable than solving 10 diverse ligand structures of five different targets.

The highest impact is of course when you solve the structure for the first time, but almost equally important is when you start to get an understanding of how things bind and interact with the binding site. Of course you learn something from every subsequent structure, but you learn less, diminishing the return on your effort for each structure you solve on a specific target. We tend not to solve too many ligand structures that modeling can predict equally well. It’s when you run across a compound whose activity you can’t explain by modeling that you need to solve another structure. So in the later stages of projects it becomes more or less a matter of reality checks to validate that the modeling is still going OK. You’re far better off focusing your resources on new, challenging targets than solving an incredible number of ligand structures. How quickly we can solve the structures of new targets, preferentially with novel fragments or ligands bound, is the key to the continuous high impact of structural information in the drug discovery process. This is of course a big challenge, since the turnover of projects in the pharmaceutical industry is very high, but we can’t afford not to deliver the key structural information in the early part of the project if we want to remain a strategic tool.
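
His 50-versus-10 comparison is essentially a diminishing-returns argument. A toy Python model, with a completely invented log-style scoring function, just to make the arithmetic visible:

```python
# Toy model of diminishing returns per target: score each target as
# log2(1 + n) for n structures solved on it. The scoring function is
# invented purely to illustrate the point, not a real information measure.

import math

def information_score(structures_per_target):
    return sum(math.log2(1 + n) for n in structures_per_target)

print(information_score([50]))             # one target, 50 similar structures: ~5.7
print(information_score([2, 2, 2, 2, 2]))  # five targets, 10 diverse structures: ~7.9
```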
