ETHZ's Jörg Stelling Says Synthetic Biology Needs Computational Methods


Jörg Stelling
assistant professor of bioinformatics
Swiss Federal Institute of Technology
Jörg Stelling, assistant professor of bioinformatics at the Swiss Federal Institute of Technology in Zürich, Switzerland, is among a handful of researchers developing computational tools to support synthetic biology.
Synthetic biology requires an understanding of the same pathways, networks, and other subcellular systems that are driving the field of computational systems biology, but with an entirely different goal. While systems biology is essentially dissecting complex biological systems to understand how they work, the goal of synthetic biology is to design biological components and systems from scratch.
BioInform recently spoke to Stelling about the role of computational methods in advancing this relatively new subdiscipline. 
What role does computation play in synthetic biology now, and how do you expect that to change as the field moves forward?
Today many of the projects do not do too much computation, but the idea of the entire field is to bring engineering into biology and enable developments similar to those we have seen in other domains – for instance in engineering complex technical systems, where computational design and computer simulation play an increasingly important role.
So I think this is where the field will be heading, but of course the main difference between traditional engineering and what this field of synthetic biology is trying to achieve is essentially we cannot really define all the parts. We don’t know all the properties and we have to invent the systems that are to be designed for the cell, which we don’t really understand, so these are the additional challenges.
So the information isn’t really available yet to begin the computational design?
Not for everything, but many people are trying to do this. There are several building blocks, for example, for the computational infrastructure. One is the Registry of Standard Biological Parts at MIT, which really tries to provide a catalog that is similar to a catalog for electronic components, where you have descriptions of parts of biological systems and then you can use this information to make your own design.
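The catalog analogy can be sketched in a few lines of code. Below is a minimal, hypothetical parts registry – the record fields, part identifiers, and entries are invented for illustration and are not real Registry entries:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Part:
    part_id: str      # registry-style identifier (invented here)
    kind: str         # functional role: "promoter", "rbs", "cds", "terminator"
    description: str

# A toy catalog; a real registry holds thousands of characterized parts.
CATALOG = [
    Part("demo_pro1", "promoter", "constitutive promoter (illustrative)"),
    Part("demo_rbs1", "rbs", "ribosome binding site (illustrative)"),
    Part("demo_cds1", "cds", "repressor coding sequence (illustrative)"),
    Part("demo_ter1", "terminator", "transcription terminator (illustrative)"),
]

def parts_of_kind(kind):
    """Return all catalog entries with a given functional role."""
    return [p for p in CATALOG if p.kind == kind]

def assemble(kinds):
    """Compose a design by picking the first available part of each
    requested kind, in order – a crude stand-in for catalog-driven design."""
    return [parts_of_kind(k)[0].part_id for k in kinds]
```

For example, `assemble(["promoter", "rbs", "cds", "terminator"])` returns the identifiers of a complete expression cassette drawn from the toy catalog – the lookup-and-compose step that the electronic-components analogy implies.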
But, as I said, the level of characterization for these types of biological components is very poor.
Would you say it’s just a question of performing more experiments to gather that information, or are there tools lacking on the computational side?
I think this is more on the experimental side, for these characterizations. In terms of what we're really lacking at the moment on the computational side: we have this repository, but no rational way, no real computer tools, to use the information, and we have no computational design and verification methods that can prove that your circuit will perform as predicted. So all these steps are currently lacking.
What’s currently being done to improve the situation?
There are several groups working in this area, especially in the US. At MIT there are several groups trying to develop new tools. Here in Europe there are so-called coordination actions that are really trying to build the infrastructure. But of course, this will only develop if industry also helps develop it in parallel. The current funding will never enable us to achieve the level of computation that you see in other engineering disciplines.  
Do you think the field has matured enough to attract commercial investment?
There are already several companies that have been founded, especially on the technology side, around some of the foundational technologies such as DNA synthesis. There is definitely interest from the big international computer companies. Microsoft has just announced a call for research projects in this area, specifically targeting the computational infrastructure, so companies like Microsoft are interested. The same goes for IBM.
How much overlap is there between the work that you do – with the goal of designing a biological system from the bottom up – and some of the simulation and modeling work that’s being done in the computational systems biology community?
We can see these two fields as two sides of one coin. Systems biology mainly is the analytical side. They are trying to reverse-engineer some systems, and synthetic biology is the attempt to create something new. And of course, they’re mutually dependent. So, for instance, a lot of the information that will be needed for synthetic biology to really work and design more complex circuits will be provided by systems biology approaches.
But this design aspect is not in systems biology, so this is something that we have to establish, and the corresponding tools.
How would you characterize the current level of cooperation or interaction between these two subdisciplines?
There is at least 50 percent overlap between the two communities. Many people who are in the synthetic biology field are also in the systems biology field, but it’s not the same vice versa.
What are your particular short-term goals for the field?
The shorter-term goal is first to devise computational methods that allow for robust design of circuits. This is essential because in biology you don't have the exact specification of your components; you don't know how many other things they interact with in the cell when you connect them. The idea is to devise methods and theories that guarantee some robustness of the circuit.
Projects that are currently running deal, for instance, with establishing oscillators that are robust. Of course, people have done this – the first application was in 2000 – but it's much harder to build a robust oscillator that you can tune, for instance, than to just build something that oscillates.
I think we’re currently at this level, so we’re trying to make these really fundamental designs – things like oscillators, switches that we need to build something more complex.
But of course there are other people working on other aspects, for instance trying to reprogram stem cells.
Do you have a sense yet of how well these designs perform? Are you able to validate them experimentally?
Usually you do some type of dynamic mathematical model, and then try to see under which operational conditions this could work, under what performance specifications. Then the ultimate test always is just to build the circuit, but this is far from perfect, of course – the translation from the model to the real world. It’s not like, for instance, how you can build a new airplane and just test it.  
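The "under which operational conditions this could work" step can be sketched as a Monte-Carlo perturbation study: randomly vary the rates of a simple two-stage transcription/translation model and check how often a performance specification still holds. The model, nominal rates, and specification window below are illustrative assumptions, not part of any real design pipeline.

```python
import random

def steady_state(k_tx, k_tl, d_m, d_p):
    """Analytic steady state of a two-stage gene-expression model:
    mRNA m* = k_tx / d_m, protein p* = k_tl * m* / d_p."""
    return (k_tx / d_m) * (k_tl / d_p)

def fraction_within_spec(lo=50.0, hi=200.0, trials=1000, spread=0.5, seed=1):
    """Perturb each nominal rate by up to +/- spread (uniformly) and
    report the fraction of sampled circuits whose steady-state protein
    level stays inside the [lo, hi] specification window."""
    rng = random.Random(seed)
    nominal = dict(k_tx=10.0, k_tl=10.0, d_m=1.0, d_p=1.0)  # nominal p* = 100
    hits = 0
    for _ in range(trials):
        params = {k: v * (1 + rng.uniform(-spread, spread))
                  for k, v in nominal.items()}
        if lo <= steady_state(**params) <= hi:
            hits += 1
    return hits / trials
```

The returned fraction is a crude robustness score: a design whose performance survives large parameter uncertainty is a better candidate for the imperfect model-to-wet-lab translation described above.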
Do you have an idea about how long it might take for these tools to be more reliable?
I think it will be pretty soon, say the next five to 10 years. Because the field is developing rapidly. I have the impression that many people from engineering are getting into the field now, so they have this background of really making these rigorous designs.
So what would be possible in five to 10 years? Would an organism like a bacterium be designable by then?
Perhaps not completely, but I think something like an artificial cell to test new properties – I think that would be feasible.
One other aspect is that other groups are also trying to use DNA as a biological component to grow new sorts of computers, so this could have an effect on that field. But that's really a mid-term or long-term prospect.
Is there anything else worth noting about what you’re seeing in the field right now regarding computational methods?
I think it’s a little bit unlike systems biology. This will not be mainly about methods for data storage or for data interpretation. The computational aspects will be different in terms of more detailed simulations, more of these design aspects. So these computational designed systems [will require us] to build pipelines that allow us to gather information that is available in parts, interpret them in a meaningful manner, and then use them as building blocks of new systems.  

So this may be one distinction between systems biology and synthetic biology. There are many aspects in systems biology in which computation is simply used to interpret massive amounts of data, and in design the task is different. Perhaps you may have the challenge of how to interpret massive possibilities of how to design a circuit, but there’s a forward direction. You don’t try to identify something, but to build something new, and this presents different problems.
