This story originally ran on May 20.
Name: Jane Bearinger
Position: Senior scientist, Lawrence Livermore National Laboratory, 2009; medical technology program leader, Lawrence Livermore National Laboratory, 2008 to present
Background: Medical technology and biodetection group leader, Lawrence Livermore National Laboratory, 2006 to 2008
If mass spectrometry is the dominant technology in proteomics, and protein microarrays are still a niche tool, then protein nanoarrays exist more as a concept than an actual method.
While a number of different designs have been devised to make nanoarrays, they still suffer from several issues, including an insufficient number of high-performance probes; a lack of substrates that allow non-purified probes to be applied directly to the array surface; and limitations in dispenser speed and spot size. As a result, they are virtually unused by proteomics researchers.
In a study published April 29 in the Papers in Press section of Molecular and Cellular Proteomics, however, a team of scientists describes a method to make protein nanoarrays that they said is both rapid and inexpensive. Their nanoarrays are based on porphyrin-based photocatalytic nanolithography and may be used for proteomic screening of immobilized biomolecules; protein-protein interactions; and "biophysical and molecular biology studies involving spatially dictated ligand placement," they said.
In the MCP study, they said the arrays "could significantly advance the capabilities of, for example, quantitative proteomics," and that the technology "is well positioned to assist in the transition from microarray to nanoarray research, and may be used to obtain global proteome analysis or even small-scale, on-chip bioreactors."
ProteoMonitor spoke recently with Jane Bearinger, the corresponding author of the article, about the technology. Below is an edited version of the conversation.
Describe for me the protein array field and some of the roadblocks in the technology.
The cost of the application is one of the main factors [as is] the reproducibility, [and] the equipment needed to process the analysis of the results can also be an issue.
But I would say [that] in terms of the industrialization, the cost is one of the main issues. And if you're in a single lab, the reproducibility and the equipment you have is one of the primary issues.
One of the techniques that people use is dip-pen nanolithography, but if one of your quills breaks and you get four out of six of your spots … the overall design of the chip is not necessarily very robust.
If you're using imprint lithography or nanoimprint lithography and you don't have a very well-balanced air table and you're getting only some of your spots across, these kinds of things can affect the fidelity of the substrate. So with pretty much any of the techniques that are around today, whether it's dip-pen, nanoimprint, any kind of contact imprint, or, for that matter, a photocatalytic technique, which compared to some of these others is still in its infancy, there are issues in terms of making them robust enough for commercialization and industrialization, to say, 'OK, how do we get to, say, 90, 95 percent fidelity of each substrate produced without having the cost of getting 100 perfect ones be unreasonable?'
Are you a user of this technology? Is that what motivated you to try to improve protein array technology?
Actually no. My background is more that of a surface scientist. I'm kind of a mutt of a chemical engineer and a materials scientist who has primarily employed those skills toward biomedical issues.
Can you briefly describe this photocatalytic method?
I have a background in biomaterials, so I have on a number of different projects in my life tried to employ what they call bio-mimetic techniques, borrowing concepts that exist already in nature — [concepts that] Mother Nature's already kind of figured out — and exploiting them for whatever kind of industrial process, whether it be implant technology, or in this case potentially proteomic arrays.
I was looking at a way to improve photolithography and get around the fact that with traditional photolithography, it is the wavelength of light that sets your resolution.
For the phase 1 [I was interested in] setting aside the … water masks that people use to get around these things with the incredibly expensive technologies. … The basic chip in your computer today … can go down to 200 to 400 nanometers.
So instead of using those high light sources with UV energy and the glass and the photoresists and everything else, what I decided to do was look at photocatalytic semiconductors and photosensitizers.
I decided, 'Let's look at Mother Nature and take the model of the chlorophyll and its interaction with the sun.'
If the plant knows how to take the sunlight with that chlorophyll and its oxidizing component, how can I just put that into a mask, then put it on top of a substrate, so that I'm not limited by the resolution of light for the fidelity of my pattern? I'm limited only by getting the photosensitizer in close proximity to the surface that I want to chemically modify.
So I take these photosensitizers that fall into a basic category of porphyrins; chlorophyll is an example of such a molecule. Hemoglobin has porphyrins in it as well, so there are elements of it in plants, and there are elements already in our bodies.
I take [these molecules] … and put them in a volatile solvent such as ethanol … I'll take a three-dimensional polymeric mask, meaning just that it has ridges and valleys, and basically swab it with a Q-Tip dipped in the alcohol solution of the porphyrin to put the porphyrin on the mask. Because the alcohol evaporates quickly, it leaves behind the porphyrin, and then with my bare hands I can take that mask and put it on top of a silicon substrate that has a thin layer of chemistry that I want to pattern.
And then … I can put the flashlight on top of the transparent mask, where the flashlight activates the porphyrin molecules, and where the ridges, as opposed to the valleys, touch the coating on top of the silicon, [the activated porphyrin] oxidatively decomposes it or oxidizes it away. Then when you take the mask off, a few seconds later, you have your patterned coating chemistry.
Is this something that someone who's not a chemical engineer can make on his own?
That's the goal: to have anybody in any lab be able to get the components and do this kind of thing.
How long would it take you to do this now that you know exactly what the steps are?
I should preface this: The ideas that are described in this paper grew out of my postdoc in Zurich [at the Federal Technical University]. … And when I came back to California I was trying to figure out ways of getting around the wavelength-of-light [issue], setting the resolution of photolithography. … When I started working with the porphyrins, I had within a year good micron-scale results, and then within another year, I was able to get down to the nanometer region.
How long would it take you now to manufacture one of these chips?
If I already have the mask, the polymeric mask, I can go into my lab and [in] the same day make a bunch of chips. … For the nanoarray work, there's one more extensive step of taking a silicon wafer and designing a master for the mask and for that part, I collaborated with the Stanford University nanofabrication lab to do e-beam for the master and then made multiple polymeric masks from the one master.
You mentioned this wavelength-of-light issue that you had to overcome. What is that?
With traditional photolithography, the masks have basically metal-covered regions and then just glass regions, so the traditional UV light that goes down through the mask doesn't penetrate where the metal layer is and penetrates just where the glass is.
So wherever the light goes through the glass and contacts the photoresist, that's where you can decompose it and make your pattern on a resist-coated wafer.
You can't make your chrome/glass pattern with traditional photolithography smaller than a few hundred nanometers, because the light won't go down through that slit; the light['s wavelength] is bigger. [But] with this technique using the porphyrins, it doesn't matter how small your ridges versus valleys are, because the light … hits the entire thing. The activity comes from the porphyrin molecule itself creating the singlet oxygen.
I can use a very long wavelength and still get a nanometer-based result because I'm activating molecules. I don't have to have a wavelength of light go through a gap in metal.
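The resolution limit Bearinger describes can be roughly estimated with the standard lithographic resolution relation, feature size ≈ k1 × wavelength / NA. This is a back-of-the-envelope sketch only; the k1, wavelength, and numerical-aperture values below are illustrative assumptions, not figures from the interview.

```python
# Rough estimate of the diffraction-limited feature size in conventional
# projection photolithography: resolution ≈ k1 * wavelength / NA.
# All parameter values here are illustrative assumptions.

def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.5):
    """Smallest printable feature (nm) for a given exposure wavelength."""
    return k1 * wavelength_nm / numerical_aperture

# An i-line mercury lamp (365 nm) with a modest-NA lens:
print(round(min_feature_nm(365, 0.6)))  # → 304
```

With these assumed values the estimate lands at roughly 300 nanometers, consistent with the few-hundred-nanometer floor she attributes to conventional photolithography; a porphyrin-based photocatalytic approach sidesteps the relation entirely because the pattern is set by mask contact, not by the exposure wavelength.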
You said that one of the issues with current protein nanoarrays is the cost issue. How much does it cost to make one of your arrays?
That's a fair question and to be direct, I'm not completely comfortable answering on the industrial scale, because I don't know that much about what things cost per chip [for mass production of microarrays].
What I can say is after you have the mask … I've always joked that this is nanolithography for pennies per capsule. All the reagents involved are extremely inexpensive. The glass slide is your substrate; the porphyrins that you can purchase commercially are available in bulk, and a teeny bit lasts forever. Solvents are very cost-effective, and there are coatings that you can use … and those might be commercially purchased or come out of specific labs, so if it comes out of a specific lab, it might be based more on a collaboration.
But the technology from the glass slides, and the Q-Tip, and the porphyrin is extremely inexpensive.
One of the reasons you would want to do a nano protein array versus a microarray is that with microarrays, one of the problems is meeting density requirements, so you would need nanotechnology for that. But one of the problems with nanoarray technology is the requirement for a large number of high-performance probes. How does your technology tackle this?
The probes are the biomolecules that you're putting on top of your surface. I can't say that I'm addressing this. That's still a separate issue. I have a cheaper way of making the array itself, but I am not tackling the issue of the probes that then go on top of the array.
Is that something you'd be interested in tackling?
If I find the right collaborator, but not within my own lab. That's not something that's in my scope or capabilities.
What about reproducibility and sensitivity? Those were things that you mentioned before.
The reproducibility, once you have characterized your mask, is very high. It's a very robust, inexpensive technique. The sensitivity right now is kind of a … function of the feature size. When I started doing this work, I started doing it on a micron scale to tell cells and proteins where to go … so in terms of putting the biomolecules down, the optics that are required to visualize and capture them are really available in any biology lab today.
Once you go down to the nano scale, there are issues in terms of signal-to-noise and having the right kinds of detectors to be able to see your signal above background. One of the things I point out in this paper is I just put a single concentration of proteins down that I knew would basically saturate the detector that we were using.
We were using a conventional microscopy camera and not a cryogenically cooled one. For ballpark spot sizes of 600 nanometers and up, sensitivity is really not that much of an issue for anybody's lab with conventional bioimaging equipment. But if you are working with spot sizes between 200 and 500 nanometers, experiments are not necessarily straightforward because recorded detector counts are not necessarily well above background.
In that case, with that small of a spot size, it does help if you have a cryogenically cooled detector. Whether you have your average lab or an industrial, fancy set-up is going to determine where you're trying to operate.
If you're doing high-throughput things in your company and you're always between 200 and 500 [nanometers] and you have your set-up with cryogenically cooled detectors, you're fine.
If you're the PI of an academic lab and you have just traditional biomedical imaging equipment, it's going to be a lot easier to stay at 600 nanometers and above.
How many proteins can these arrays handle?
I don't have a direct answer for you there.
My understanding is that with the typical protein nanoarray, it can handle somewhere around 10 proteins.
That's a matter of how you set it up and how you divide your proteins and how many spots per protein that you have. There's the [Mike] Snyder group biochip where they have I think around 1,000 proteins, but the spot sizes are at least 100 microns each.
When you go down in size, one of the issues you then have is that you can't use traditional spotting techniques to put your solution of proteins on top of it. … One of the things I'm working on is ways of combining some traditional lithography to set up little walls, if you will, so that you can then have sections of nanoarrays and still separate your protein 1 from protein 25, and therefore get fidelity in terms of X number of spots per protein and multiplex it, ideally starting at something like 100 proteins.
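The density argument behind moving from microarrays to nanoarrays can be made concrete with simple arithmetic. The pitch values below are illustrative assumptions loosely based on the spot sizes mentioned in the interview (roughly 100-micron microarray spots versus sub-micron nanoarray features), not measurements from the paper.

```python
# Rough spot-density comparison for a 1 cm x 1 cm array area, assuming
# simple square packing at the given center-to-center pitch.
# Pitch values are illustrative assumptions, not figures from the study.

def spots_per_cm2(pitch_um):
    """Number of spots that fit in 1 cm^2 at a given pitch (micrometers)."""
    per_side = int(10_000 / pitch_um)  # 1 cm = 10,000 micrometers
    return per_side ** 2

micro = spots_per_cm2(100)  # ~100-micron microarray spots -> 10,000 spots
nano = spots_per_cm2(1)     # ~1-micron nanoarray pitch -> 100,000,000 spots
print(micro, nano)
```

Under these assumptions, shrinking the pitch a hundredfold multiplies the achievable spot count per unit area by ten thousand, which is why dividing the surface into walled-off sections becomes necessary to keep different proteins separated.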
Have you done any scientific work with these arrays?
I have not started working with specific proteins and the level of protein concentration and analysis of the linear detection range, so the direct answer there is no.
The next set of experiments should be addressing the range of protein concentration and analysis of the linear detection range so that we can go onto the next step similar to other groups.