Brigitta Tadmor, a molecular biologist trained at the Weizmann Institute in Israel, is the executive director of the Massachusetts Institute of Technology’s Computational and Systems Biology Initiative (CSBi), a university-wide education and research program using a multi-disciplinary approach to the systematic analysis and creation of predictive models of complex molecular biological activities.
CSBi includes about 40 faculty members from over 10 academic units across MIT’s Schools of Science and Engineering, the Sloan School of Management, and the Whitehead Institute for Biomedical Research.
The program maintains a list of technologies that it identifies as a shared platform of tools essential to advancing systems biology efforts. These include: high-performance computing; imaging; microsystems; genomics and RNAi; modeling and informatics; proteomics; and synthetic biology. Resources in this network include existing core labs and dedicated technology-development efforts.
BioCommerce Week spoke with Tadmor in August to see how one of the founding efforts in academic systems biology is progressing.
When one thinks of MIT, computing comes to mind. Is that the big challenge here?
We look at this as a collision between biology and engineering, an intellectual collision. Let me tell you, it is a very scary thought for some biologists that engineers may ultimately be the ones who move this field forward. At least at MIT, engineers are seen as playing an absolutely essential role as partners, bringing hundreds of years of experience in looking at complex systems.
The models being built here come out of systematic data sets, and we don't think we yet have the data to build these models. We don't believe that the world is full of data and you can just bring in a bunch of computational biologists, or informaticists, to figure it all out.
When did you first hear the term systems biology?
Probably when Leroy Hood set up the Institute for Systems Biology. That’s when I started to pay attention. But, I think it is very misleading to think of systems biology using any genomics terms — functional genomics, or any other ‘omic.’ It’s certainly not in silico biology, not mathematical biology, not theoretical biology. We have a good sense of what this is not.
What are the end points? Is it developing drugs, treatment of disease?
In terms of impact, we all anticipate this is going to affect the pharmaceutical industry, not just in discovery, but at many stages in the multi-step process of how pharmaceuticals are developed. It will also likely have an impact on the manufacturing of materials, some of which is going to come out of synthetic biology, which may be a subset of systems biology. I think the earlier impact is going to be in the medical area, in pharmaceutical R&D.
What are the tools that are needed?
So you have the usual suspects; we all need microarrays to get system-wide expression data. But that is very structured data. Even some proteomics is rather structured at the moment: it's an either-or, an up-or-down, or similarly simple structured data. Very important technologies are available on the informatics side and on the device side. What we feel is very lacking is support for more unstructured data, such as data generated by images. If you take old-fashioned microscopy, we have no way of capturing the data. There is data that exists in a notebook, and we don't know how to digitize it, or how to compare it to, let's say, microarray data. Those are the questions we talked about at IBM, and those are the sorts of things that the large IT companies will think very hard about.
How did CSBi come about?
The precursor to the program was a DARPA project involving about six investigators who had first submitted the idea to the NIH. They had the guts to basically tell NIH: 'We're going to develop models that will make decisions.' And the NIH told them they were crazy. Then they went to DARPA, and DARPA said, 'Yes, you are crazy, but we will order from you.' That led to the center of excellence, which is outside the NIH Roadmap; it came out a year before the Roadmap was officially published.
What have you learned in the past year?
Some of the investigators we originally thought were going to be key to this program turned out not to be interested. We brought in people who work on the network theory of utility networks and who wanted to start thinking about how their expertise could apply to biological networks. And we learned that people who might seem very peripheral could ultimately participate in a very substantive way.
The field is changing so much at the moment that it is really unclear which models apply. You have to find out. You can see that here in the changing mix of faculty who are interested in applying their expertise to this field. I think it is important to understand that, particularly on the computation side, in the sort of tools being developed, and even on the device side: a paradigm shift on the intellectual level typically requires a paradigm shift in the types of tools that are out there. I think it is fair to say it is premature to commercialize anything in this field because the intellectual underpinnings have not been worked out. That is going to take a little longer.
It's not like genomics, where Affymetrix could make arrays and build a business, at least on arrays. The intellectual underpinning of genomics is molecular biology; genomics was basically the scale-up. The paradigm was worked out, so you could develop the tools, and we did see companies successfully develop tools. When I look at our microsystems engineers, the things they are thinking about are so tightly integrated with what the biologists need, in terms of what types of data and what types of measurements, that the true development has been driven by the biological questions. That, I think, really goes hand in hand, and it cannot be done in a way that is divorced.
You have identified a list of technology platforms as core to the CSBi efforts.
The platform we have is a large foundation, and the goal is to build an integrated technology platform spanning broad areas in which we either have expertise or have identified a gap that needs to be filled. These range from microarrays, where people come and are shown how to use them, to proteomics, where we are developing radically new tools, and the same in imaging. But most important is microsystems engineering. That is where the disruptive technologies are going to come from, where we will collect data in a very different way. These guys are thinking about ways of measuring things we haven't seen in biology. They measure small mechanical changes, or things we never think of, like radioactive labels.
How will this technology be commercialized?
That is one of our mandates. Making an impact in industry is part of the mission at MIT.
Obviously the pharmaceutical industry is going to be the end user of anything that comes out of this field, so they will need to adopt some of the new paradigms. Along the way, we need tool providers to develop the tools that can be used in pharma R&D. We work with some of the companies in the technology-development area so that they understand how some of the technologies are used, what the issues are, and what one could do if one had x, y, or z capability.
The companies that have the capability of becoming integrators would be the most useful in this.
What is the state of progress in MIT’s systems efforts?
We have many years of modeling effort, with half a dozen faculty thinking really hard about it for many years, but the models they are coming up with at this point are just barely predictive, in the sense that they give you counterintuitive information that you can experimentally test and find to be correct. We are going in the right direction.
The really hard problem is to pick a problem area where we can collect enough data, where we know enough about the system, and where we have the experimental capacity to do additional experiments and build sufficiently robust models.
How does systems biology differ from pharmacogenomics?
I think the pharmacogenomics concept was built on the contributions of genetics: know the blueprint and you know how someone will react to something. Systems biology expands on that and says genetics is one input, but the environment contributes many other inputs. While we need to take genetics seriously as one input, there are many other things. So what we do is look at network states that take everything into account: genetics, environmental conditions, whether someone had another disease, what they were eating. The idea is that you build the model with those parameters in it and you would ultimately be able to predict whether a patient would respond to a certain drug or not, not just by taking SNPs or a genetic marker, but maybe by looking at network markers. It's a network-based concept of pharmacogenomics, which is clearly where systems biology has gone in the area of oncology. That is where it is going to happen sooner rather than later. We do hope that these models will be able to help make clinical decisions. That can certainly happen in the next five years or so.
At what rate of accuracy? Does it need to be 100 percent?
There are people who feel that you can't model a system unless you know everything about every component in it. But if that were the case, we would never be able to do it. There are too many things happening in cells; there are too many components. The challenge is to take enough components into account, to know enough about them and their pathways, and to build a network that, let's say for the sake of argument, reflects 80 percent of reality, and have that be robust enough to make predictions.
There is some hierarchy, and certain things will be more important, but biology has redundancy built into its systems, and not everything will push you over the edge. The challenge is filtering out which network changes are the important ones that push a system over the edge into cancer, into pain, whatever.