Roundtable: Technology for Tomorrow



Leslie Goodwin
Molecular Genetics Core Facility, Feinstein Institute for Medical Research

Susan Perkins
Institute of Comparative Genomics, American Museum of Natural History

Fabio Piano
Center for Comparative Functional Genomics, New York University

Daniel Spellman
Skirball Institute of Biomolecular Medicine, NYU School of Medicine

Early last month, scientists from four New York-area genomics and proteomics labs got together for a Genome Technology roundtable. This year’s theme is the lab of tomorrow: what will it take to keep your lab on the cutting edge with technology changing so rapidly? Our focus in this session was on automation and robotics, so we asked participants to tell us how they plan ahead for the latest technological innovations — all while avoiding bringing in technology that sounds exciting but will serve as a doorstop six months from now. With expertise in sequencing, mass spec, imaging, microarrays, PCR, and much more, our scientists hashed out when to solve a bottleneck and when to buy the newest toy, and gave free rein to their imaginations in thinking about technology that would prove useful to the field but doesn’t exist yet.

Genome Technology: Let’s start with brief descriptions of your labs and the technology you’re using.

Susan Perkins: In the last couple of years we’ve instituted an initiative — the [museum] trustees and the administration wanted to be more cutting edge and be part of the genomics revolution. They’ve developed the Institute of Comparative Genomics, and in the last year they’ve renovated our regular molecular biology lab into a genomics lab, which we just moved into two weeks ago. They’re also renovating the former lab to make it more of a molecular systematics lab.

We currently have in our core a couple of 3730 sequencers and some basic liquid handling techniques. Being the museum, our main goal has been taxonomy and systematics, but we are trying to do more in the realm of genomics. We’ve got some bacterial and viral whole-genome sequencing projects. It’s almost all sequence-based.

Fabio Piano: I’m at the biology department and the Center for Comparative Functional Genomics at NYU. We try to address questions of how species use genomes, but also focus on the functional part of how these things are actually evolving. That’s the focus of the center. I study specifically embryogenesis in nematodes. We use that as a model system both to understand how embryos develop and to try to understand some basic aspects of how developmental programs evolve between species. We’ve done a genome-wide RNAi screen and identified all the genes that are required for the processes of early embryogenesis. An important aspect of the center is to focus on imaging technologies.

Leslie Goodwin: I come from a very different vantage. We are a translational research paradigm and that is our great strength. We have a lot of clinical research that goes on as well as investigators that support the basic end of clinical or translational research. I run the core facility and my own lab at the institute.

As a core facility, we go from oligonucleotides all the way up to mass spec. We have the biorepository for AMDeC [a New York-based biomedical research consortium] that is all robot driven — it’s a Tecan robot and cherry picker. It is a fully automated repository that contains 300,000 DNA samples. We have an Affymetrix platform in which we’ve done an enormous number of gene expression arrays, as well as now moving into the 100K and 500K SNP chips. We have not really had to go to a liquid-handling robot yet but we’re so close to needing it. We are an evolving core facility.

[With] systems biology, the coordination and integration of all these technologies will wind up being one of the largest challenges for everybody.

Daniel Spellman: We’re both a research lab and a core facility. We are the protein analysis facility at the Skirball Institute of Biomolecular Medicine. The Skirball Institute is not so much translational research as medically oriented molecular research. We’re part of the structural biology department.

What we do is we have a few core grants: we’re a core for the cancer center, for the neuroscience institute, and we’re developing a clinical core for a separate cancer institute as well. We have a lot of strings pulling at us.

We have three major research areas in the lab. The first is what you might call targeted proteomics — both identifying components of a particular signaling complex and then we also try to do relative quantitation [to show] how is that complex changing over time. We’re very interested in how proteins are modified upon a stimulus or between two different states. We also have what would be called organellar proteomics: what are all the proteins that are actually in that particular organelle? A third thing would be biomarkers — is there some way we can take some sort of biological sample and do an analysis between groups of people and be able to pick out a very specific feature that [would indicate a] person having a disease or being more susceptible to a disease.

We try to focus on mass spectrometry-based proteomic technology. There’s always going to be questions that you can’t answer with existing technology and you need people that are working on developing new technology or developing new methodology that can be used with existing technology to answer specific questions.

Piano: Do you see the issues in your technology being the quantification of the samples?

Spellman: There are a lot of issues there. First of all, there’s always new instrumentation that’s the next step in sensitivity. Separation techniques are also a very quickly developing field that helps your sensitivity as well, so we try to keep on top of that. But you really have to balance what you can analyze. Right now we have roughly six mass spectrometers in the lab, and we acquire maybe on the order of 50 to 60 gigabytes of data a week — more than a terabyte every six months.
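As a rough sanity check on those figures (the weekly rate is the speaker's own estimate; the midpoint and week count are assumptions):

```python
# Back-of-the-envelope arithmetic on the data rate quoted above.
# 55 GB/week is the assumed midpoint of the 50-60 GB estimate,
# and 26 weeks approximates six months.
gb_per_week = 55
weeks_per_half_year = 26
total_tb = gb_per_week * weeks_per_half_year / 1000  # decimal TB
print(f"{total_tb:.2f} TB per six months")  # 1.43 TB
```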

Genome Technology: How do you know when it’s time to get a new technology? How do you decide when to buy or whether to wait for something new to come out?

Perkins: Your postdocs tell you.

Goodwin: I think you just have a bottleneck, and all of a sudden you thought that you could process X number of samples in a much more expeditious manner than you are doing. The bottleneck isn’t the scanning, the bottleneck isn’t any other aspect, the bottleneck is the preparation of the samples.

Spellman: One problem we find with automation, though, is that often an automated process may not give as good a result. You have to balance — is automating things and getting higher throughput going to enable you to answer your questions better? Especially with us at a medical school we feel that the science really has to drive the technology. Can we answer this particular question better by upgrading our technology, or are we going to be able to ask a better question? Not just a new toy.

Piano: But I imagine that’s technology-specific. So automation of, say, DNA sequencing really improved the quality of the DNA sequence. What I’ve found in thinking of large-scale projects is that we spend a lot of time thinking how you are going to do it. Once you’ve made that decision then you are explicitly making a set of parameters that are going to go for the whole genome. A lot of times the high-throughput data is of good quality or generally better quality because you’ve spent a lot of time thinking about these decisions — you just don’t do an experiment right away.

Perkins: This has been a challenge for us recently with the new lab. We’re getting a new liquid handling robot. We’ve had one in use in the old lab, but it’s been hard to balance genome projects where you can have five or even 10 percent error — where the robot tip doesn’t fill efficiently and you lose that reaction — with a bunch of other people working in the lab where they’re doing multiple species, multiple genes, and so every single well is very important. That might represent a species that they went to the wilds of Australia to collect and they have a limited tissue sample and if that fails too many times they’ve lost that entire point. So we want to keep up the efficiency but keep up the accuracy.

Spellman: That’s a problem with us. We’ll have projects where we’ve spent maybe three to six months just generating the sample, so putting that in a robot’s hands takes a leap of faith.

Piano: We found that we were developing new technology or acquiring new technology when we had a bottleneck, but by definition when you solve one bottleneck, you just get another somewhere else. The other way is we take a little bit of risk. This can be done to a point that is not useful — you walk into some places where there’s a lot of machines and no one’s using them. In our case we really try to work [first] by hitting a bottleneck [and then going] for the technology, but we also see that maybe some technologies are not fully developed and you take a little bit of risk and you try the new technology.

Goodwin: I love what you have to say because I think lots of people denigrate the “if you build it, it will come” model. That’s very true — often that doesn’t work well; we’ve had doorstops that were mass specs. But often people certainly are mired in the way they’ve done it too so you do need some risk. You need somebody to come in and show you [that] we’re not dramatically changing everything you’ve thought about your experiment, but look how this is going to help facilitate it. There is an area for risk. People saying they want science to drive the technology — absolutely. But there is also a place for some of these toys to expand horizons to improve your throughput and your abilities and to bring you up to the next level.

Spellman: A lot of what happens with new technology is that you have to figure out a way to integrate it into your lab. So you think, ‘That’s exciting, it’s a new way to approach a question.’ [But] is it going to need a new software platform, are you going to have to develop your own software, do you need to bring in an outside specialist that maybe worked in the lab that developed this technology, is it from a company that has a pre-existing software that’ll fit in with your other systems? Getting a new technology and actually integrating it into your lab are two very different things.

Piano: Sometimes you don’t know how this is going to integrate and you want the [vendor] to help out by taking a little bit of risk with you.

Genome Technology: Looking forward, what sort of technologies would you like to invent that the field needs? What needs to be automated that isn’t now?

Perkins: I want the thing that they have on CSI, where they get their sample and have their sequence seven minutes later.

Goodwin: I’d like an automated ability to detect proteins and post-translational modifications; also something that could take a very small patch of skin or a blood sample and tell a person what their particular signaling profile is.

Piano: We span large-scale analyses and single-gene analyses, [so] we find it useful to have tools that help visualize the data that comes from the large-scale analyses — maybe mass spec results or microarray results or RNAi results — and put it into the context of pathways and modules or networks or whatever paradigm makes sense, to start to visualize the dynamics of what’s going on inside cells. There are some packages out there that help. Reactome is one open-source approach to this.

Perkins: One thing that we’ve been talking about in the last couple of weeks is that we’ve been surprised they haven’t developed much software in terms of tracking projects when you get into multiple species and isolates and samples. We had this little fantasy conversation about being able to have a relational database where I’d set up a PCR and I’d have 20 species and just barcode them all and put the primers in and [it would] keep track of the information.
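The tracking database described here can be sketched as a pair of relational tables keyed by barcode. This is a minimal illustration, not an existing package; every table, column, species, and primer name below is hypothetical:

```python
import sqlite3

# In-memory sketch of a barcode-driven PCR tracking database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE species (
    id      INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    barcode TEXT UNIQUE NOT NULL
);
CREATE TABLE pcr_reactions (
    id         INTEGER PRIMARY KEY,
    species_id INTEGER NOT NULL REFERENCES species(id),
    primer_fwd TEXT NOT NULL,
    primer_rev TEXT NOT NULL,
    well       TEXT NOT NULL
);
""")

# Register a barcoded sample, then record the PCR set up against it.
conn.execute("INSERT INTO species (name, barcode) VALUES (?, ?)",
             ("Plasmodium relictum", "BC-0001"))
conn.execute(
    """INSERT INTO pcr_reactions (species_id, primer_fwd, primer_rev, well)
       VALUES ((SELECT id FROM species WHERE barcode = ?), ?, ?, ?)""",
    ("BC-0001", "cytb_F", "cytb_R", "A1"))

# Scanning a barcode recovers everything known about that reaction.
row = conn.execute(
    """SELECT s.name, p.primer_fwd, p.primer_rev, p.well
       FROM pcr_reactions p JOIN species s ON s.id = p.species_id
       WHERE s.barcode = ?""", ("BC-0001",)).fetchone()
print(row)  # ('Plasmodium relictum', 'cytb_F', 'cytb_R', 'A1')
```

The join means a failed well can be traced back to a specific sample, which matters when, as above, the sample represents a species collected once in the field.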

